Test Report: KVM_Linux_crio 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-14:36202

Failed tests (30/312)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.15
34 TestAddons/parallel/Ingress 155.78
36 TestAddons/parallel/MetricsServer 329.03
164 TestMultiControlPlane/serial/StopSecondaryNode 141.97
166 TestMultiControlPlane/serial/RestartSecondaryNode 51.04
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 371.84
171 TestMultiControlPlane/serial/StopCluster 141.73
231 TestMultiNode/serial/RestartKeepsNodes 323.73
233 TestMultiNode/serial/StopMultiNode 141.36
240 TestPreload 203.92
248 TestKubernetesUpgrade 365.74
285 TestPause/serial/SecondStartNoReconfiguration 241.45
319 TestStartStop/group/old-k8s-version/serial/FirstStart 285.44
339 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.13
347 TestStartStop/group/no-preload/serial/Stop 139.26
356 TestStartStop/group/embed-certs/serial/Stop 139.12
357 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 94.73
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.39
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 723.69
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.17
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.32
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.11
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.6
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 544.2
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 384.35
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 391.56
375 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 153.01
TestAddons/parallel/Registry (74.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.236086ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005284432s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003460074s
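The two waits above poll the kube-system namespace until pods matching a label selector report Running (6m0s for actual-registry=true, 10m0s for registry-proxy=true). A minimal client-go sketch of that kind of wait is shown here for reference; the namespace, selector, and 6-minute timeout come from the log, while the kubeconfig handling and 2-second poll interval are illustrative assumptions rather than the test's actual helper code.

// waitforpods.go: poll until pods matching a label selector are Running.
// Sketch only; assumes a reachable cluster via the default kubeconfig.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); illustrative, not the test's config path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same namespace, selector, and 6m timeout as the first wait in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "actual-registry=true",
			})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // no matching pods yet, keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("registry pods are Running")
}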
addons_test.go:342: (dbg) Run:  kubectl --context addons-473197 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-473197 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-473197 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.091938496s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-473197 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
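The failing step is an in-cluster HTTP probe: a throwaway busybox pod runs wget --spider against the registry Service's cluster DNS name, the command times out, and the test never sees the expected HTTP/1.1 200. The same check, expressed as a small Go program, looks roughly like the sketch below; it would have to run from inside a pod, since the Service name only resolves on the cluster network, and the 30-second timeout is an assumption for illustration.

// registryprobe.go: check that the in-cluster registry Service answers HTTP.
// Sketch only; must run inside the cluster. The Service DNS name is taken from the log.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	const registryURL = "http://registry.kube-system.svc.cluster.local"

	client := &http.Client{Timeout: 30 * time.Second} // illustrative timeout
	resp, err := client.Get(registryURL)
	if err != nil {
		log.Fatalf("registry not reachable: %v", err)
	}
	defer resp.Body.Close()

	// The test expects the registry root to answer with HTTP/1.1 200.
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status: %s", resp.Status)
	}
	fmt.Println("registry answered", resp.Status)
}

The lines that follow show the test's host-side fallback, a direct GET against the node IP on port 5000, before it disables the registry addon and collects post-mortem logs.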
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 ip
2024/09/13 23:38:45 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-473197 -n addons-473197
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 logs -n 25: (1.413159012s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-551384                                                                     | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| start   | -o=json --download-only                                                                     | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | -p download-only-763760                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-763760                                                                     | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-551384                                                                     | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-763760                                                                     | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-510431 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | binary-mirror-510431                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40845                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-510431                                                                     | binary-mirror-510431 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-473197 --wait=true                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:37 UTC | 13 Sep 24 23:37 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-473197 ssh cat                                                                       | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | /opt/local-path-provisioner/pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | -p addons-473197                                                                            |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | -p addons-473197                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-473197 ip                                                                            | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:27:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:27:19.727478   13355 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:27:19.727577   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:19.727584   13355 out.go:358] Setting ErrFile to fd 2...
	I0913 23:27:19.727589   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:19.727825   13355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:27:19.728488   13355 out.go:352] Setting JSON to false
	I0913 23:27:19.729317   13355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":586,"bootTime":1726269454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:27:19.729406   13355 start.go:139] virtualization: kvm guest
	I0913 23:27:19.731822   13355 out.go:177] * [addons-473197] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:27:19.733210   13355 notify.go:220] Checking for updates...
	I0913 23:27:19.733237   13355 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:27:19.734712   13355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:27:19.735976   13355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:27:19.737182   13355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:19.738438   13355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:27:19.739925   13355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:27:19.741131   13355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:27:19.775615   13355 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 23:27:19.777213   13355 start.go:297] selected driver: kvm2
	I0913 23:27:19.777235   13355 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:27:19.777247   13355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:27:19.777996   13355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:19.778088   13355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:27:19.793811   13355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:27:19.793861   13355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:27:19.794087   13355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:27:19.794117   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:19.794161   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:19.794171   13355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:27:19.794217   13355 start.go:340] cluster config:
	{Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:19.794313   13355 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:19.796337   13355 out.go:177] * Starting "addons-473197" primary control-plane node in "addons-473197" cluster
	I0913 23:27:19.797380   13355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:19.797422   13355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:27:19.797444   13355 cache.go:56] Caching tarball of preloaded images
	I0913 23:27:19.797531   13355 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:27:19.797549   13355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:27:19.797846   13355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json ...
	I0913 23:27:19.797865   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json: {Name:mkc3a28348c95a05c47c4230656de6866b98328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:19.798004   13355 start.go:360] acquireMachinesLock for addons-473197: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:27:19.798046   13355 start.go:364] duration metric: took 28.71µs to acquireMachinesLock for "addons-473197"
	I0913 23:27:19.798062   13355 start.go:93] Provisioning new machine with config: &{Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:27:19.798113   13355 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 23:27:19.799714   13355 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 23:27:19.799890   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:27:19.799928   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:27:19.814905   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0913 23:27:19.815364   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:27:19.815966   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:27:19.815989   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:27:19.816395   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:27:19.816630   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:19.816779   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:19.816997   13355 start.go:159] libmachine.API.Create for "addons-473197" (driver="kvm2")
	I0913 23:27:19.817032   13355 client.go:168] LocalClient.Create starting
	I0913 23:27:19.817080   13355 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:27:19.909228   13355 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:27:19.970689   13355 main.go:141] libmachine: Running pre-create checks...
	I0913 23:27:19.970714   13355 main.go:141] libmachine: (addons-473197) Calling .PreCreateCheck
	I0913 23:27:19.971194   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:19.971662   13355 main.go:141] libmachine: Creating machine...
	I0913 23:27:19.971677   13355 main.go:141] libmachine: (addons-473197) Calling .Create
	I0913 23:27:19.971844   13355 main.go:141] libmachine: (addons-473197) Creating KVM machine...
	I0913 23:27:19.973234   13355 main.go:141] libmachine: (addons-473197) DBG | found existing default KVM network
	I0913 23:27:19.974016   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:19.973849   13377 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0913 23:27:19.974095   13355 main.go:141] libmachine: (addons-473197) DBG | created network xml: 
	I0913 23:27:19.974122   13355 main.go:141] libmachine: (addons-473197) DBG | <network>
	I0913 23:27:19.974136   13355 main.go:141] libmachine: (addons-473197) DBG |   <name>mk-addons-473197</name>
	I0913 23:27:19.974149   13355 main.go:141] libmachine: (addons-473197) DBG |   <dns enable='no'/>
	I0913 23:27:19.974157   13355 main.go:141] libmachine: (addons-473197) DBG |   
	I0913 23:27:19.974171   13355 main.go:141] libmachine: (addons-473197) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 23:27:19.974179   13355 main.go:141] libmachine: (addons-473197) DBG |     <dhcp>
	I0913 23:27:19.974184   13355 main.go:141] libmachine: (addons-473197) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 23:27:19.974189   13355 main.go:141] libmachine: (addons-473197) DBG |     </dhcp>
	I0913 23:27:19.974194   13355 main.go:141] libmachine: (addons-473197) DBG |   </ip>
	I0913 23:27:19.974216   13355 main.go:141] libmachine: (addons-473197) DBG |   
	I0913 23:27:19.974226   13355 main.go:141] libmachine: (addons-473197) DBG | </network>
	I0913 23:27:19.974233   13355 main.go:141] libmachine: (addons-473197) DBG | 
	I0913 23:27:19.980176   13355 main.go:141] libmachine: (addons-473197) DBG | trying to create private KVM network mk-addons-473197 192.168.39.0/24...
	I0913 23:27:20.045910   13355 main.go:141] libmachine: (addons-473197) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 ...
	I0913 23:27:20.045940   13355 main.go:141] libmachine: (addons-473197) DBG | private KVM network mk-addons-473197 192.168.39.0/24 created
	I0913 23:27:20.045954   13355 main.go:141] libmachine: (addons-473197) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:27:20.046047   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.045834   13377 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:20.046087   13355 main.go:141] libmachine: (addons-473197) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:27:20.298677   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.298568   13377 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa...
	I0913 23:27:20.458808   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.458662   13377 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/addons-473197.rawdisk...
	I0913 23:27:20.458837   13355 main.go:141] libmachine: (addons-473197) DBG | Writing magic tar header
	I0913 23:27:20.458849   13355 main.go:141] libmachine: (addons-473197) DBG | Writing SSH key tar header
	I0913 23:27:20.458859   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.458774   13377 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 ...
	I0913 23:27:20.458873   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197
	I0913 23:27:20.458907   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 (perms=drwx------)
	I0913 23:27:20.458937   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:27:20.458947   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:27:20.458964   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:20.458975   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:27:20.458985   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:27:20.459015   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:27:20.459028   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:27:20.459044   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:27:20.459058   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:27:20.459067   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home
	I0913 23:27:20.459081   13355 main.go:141] libmachine: (addons-473197) DBG | Skipping /home - not owner
	I0913 23:27:20.459096   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:27:20.459111   13355 main.go:141] libmachine: (addons-473197) Creating domain...
	I0913 23:27:20.459993   13355 main.go:141] libmachine: (addons-473197) define libvirt domain using xml: 
	I0913 23:27:20.460017   13355 main.go:141] libmachine: (addons-473197) <domain type='kvm'>
	I0913 23:27:20.460026   13355 main.go:141] libmachine: (addons-473197)   <name>addons-473197</name>
	I0913 23:27:20.460037   13355 main.go:141] libmachine: (addons-473197)   <memory unit='MiB'>4000</memory>
	I0913 23:27:20.460042   13355 main.go:141] libmachine: (addons-473197)   <vcpu>2</vcpu>
	I0913 23:27:20.460054   13355 main.go:141] libmachine: (addons-473197)   <features>
	I0913 23:27:20.460079   13355 main.go:141] libmachine: (addons-473197)     <acpi/>
	I0913 23:27:20.460098   13355 main.go:141] libmachine: (addons-473197)     <apic/>
	I0913 23:27:20.460109   13355 main.go:141] libmachine: (addons-473197)     <pae/>
	I0913 23:27:20.460119   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460142   13355 main.go:141] libmachine: (addons-473197)   </features>
	I0913 23:27:20.460165   13355 main.go:141] libmachine: (addons-473197)   <cpu mode='host-passthrough'>
	I0913 23:27:20.460178   13355 main.go:141] libmachine: (addons-473197)   
	I0913 23:27:20.460200   13355 main.go:141] libmachine: (addons-473197)   </cpu>
	I0913 23:27:20.460208   13355 main.go:141] libmachine: (addons-473197)   <os>
	I0913 23:27:20.460213   13355 main.go:141] libmachine: (addons-473197)     <type>hvm</type>
	I0913 23:27:20.460220   13355 main.go:141] libmachine: (addons-473197)     <boot dev='cdrom'/>
	I0913 23:27:20.460226   13355 main.go:141] libmachine: (addons-473197)     <boot dev='hd'/>
	I0913 23:27:20.460238   13355 main.go:141] libmachine: (addons-473197)     <bootmenu enable='no'/>
	I0913 23:27:20.460250   13355 main.go:141] libmachine: (addons-473197)   </os>
	I0913 23:27:20.460265   13355 main.go:141] libmachine: (addons-473197)   <devices>
	I0913 23:27:20.460282   13355 main.go:141] libmachine: (addons-473197)     <disk type='file' device='cdrom'>
	I0913 23:27:20.460301   13355 main.go:141] libmachine: (addons-473197)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/boot2docker.iso'/>
	I0913 23:27:20.460326   13355 main.go:141] libmachine: (addons-473197)       <target dev='hdc' bus='scsi'/>
	I0913 23:27:20.460339   13355 main.go:141] libmachine: (addons-473197)       <readonly/>
	I0913 23:27:20.460345   13355 main.go:141] libmachine: (addons-473197)     </disk>
	I0913 23:27:20.460351   13355 main.go:141] libmachine: (addons-473197)     <disk type='file' device='disk'>
	I0913 23:27:20.460361   13355 main.go:141] libmachine: (addons-473197)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:27:20.460368   13355 main.go:141] libmachine: (addons-473197)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/addons-473197.rawdisk'/>
	I0913 23:27:20.460375   13355 main.go:141] libmachine: (addons-473197)       <target dev='hda' bus='virtio'/>
	I0913 23:27:20.460379   13355 main.go:141] libmachine: (addons-473197)     </disk>
	I0913 23:27:20.460385   13355 main.go:141] libmachine: (addons-473197)     <interface type='network'>
	I0913 23:27:20.460390   13355 main.go:141] libmachine: (addons-473197)       <source network='mk-addons-473197'/>
	I0913 23:27:20.460397   13355 main.go:141] libmachine: (addons-473197)       <model type='virtio'/>
	I0913 23:27:20.460401   13355 main.go:141] libmachine: (addons-473197)     </interface>
	I0913 23:27:20.460408   13355 main.go:141] libmachine: (addons-473197)     <interface type='network'>
	I0913 23:27:20.460413   13355 main.go:141] libmachine: (addons-473197)       <source network='default'/>
	I0913 23:27:20.460419   13355 main.go:141] libmachine: (addons-473197)       <model type='virtio'/>
	I0913 23:27:20.460424   13355 main.go:141] libmachine: (addons-473197)     </interface>
	I0913 23:27:20.460430   13355 main.go:141] libmachine: (addons-473197)     <serial type='pty'>
	I0913 23:27:20.460446   13355 main.go:141] libmachine: (addons-473197)       <target port='0'/>
	I0913 23:27:20.460463   13355 main.go:141] libmachine: (addons-473197)     </serial>
	I0913 23:27:20.460475   13355 main.go:141] libmachine: (addons-473197)     <console type='pty'>
	I0913 23:27:20.460492   13355 main.go:141] libmachine: (addons-473197)       <target type='serial' port='0'/>
	I0913 23:27:20.460504   13355 main.go:141] libmachine: (addons-473197)     </console>
	I0913 23:27:20.460514   13355 main.go:141] libmachine: (addons-473197)     <rng model='virtio'>
	I0913 23:27:20.460527   13355 main.go:141] libmachine: (addons-473197)       <backend model='random'>/dev/random</backend>
	I0913 23:27:20.460540   13355 main.go:141] libmachine: (addons-473197)     </rng>
	I0913 23:27:20.460548   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460554   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460564   13355 main.go:141] libmachine: (addons-473197)   </devices>
	I0913 23:27:20.460574   13355 main.go:141] libmachine: (addons-473197) </domain>
	I0913 23:27:20.460592   13355 main.go:141] libmachine: (addons-473197) 
	I0913 23:27:20.466244   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:75:c0:ca in network default
	I0913 23:27:20.467639   13355 main.go:141] libmachine: (addons-473197) Ensuring networks are active...
	I0913 23:27:20.467669   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:20.468356   13355 main.go:141] libmachine: (addons-473197) Ensuring network default is active
	I0913 23:27:20.468605   13355 main.go:141] libmachine: (addons-473197) Ensuring network mk-addons-473197 is active
	I0913 23:27:20.469014   13355 main.go:141] libmachine: (addons-473197) Getting domain xml...
	I0913 23:27:20.469710   13355 main.go:141] libmachine: (addons-473197) Creating domain...
	I0913 23:27:21.903658   13355 main.go:141] libmachine: (addons-473197) Waiting to get IP...
	I0913 23:27:21.904363   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:21.904874   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:21.904902   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:21.904817   13377 retry.go:31] will retry after 304.697765ms: waiting for machine to come up
	I0913 23:27:22.211392   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.211878   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.211895   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.211847   13377 retry.go:31] will retry after 296.206544ms: waiting for machine to come up
	I0913 23:27:22.509388   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.510038   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.510074   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.509984   13377 retry.go:31] will retry after 351.816954ms: waiting for machine to come up
	I0913 23:27:22.863507   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.863981   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.864012   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.863920   13377 retry.go:31] will retry after 530.240488ms: waiting for machine to come up
	I0913 23:27:23.395630   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:23.396082   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:23.396145   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:23.396069   13377 retry.go:31] will retry after 548.533639ms: waiting for machine to come up
	I0913 23:27:23.945981   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:23.946426   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:23.946449   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:23.946390   13377 retry.go:31] will retry after 804.440442ms: waiting for machine to come up
	I0913 23:27:24.752386   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:24.752879   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:24.752901   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:24.752819   13377 retry.go:31] will retry after 784.165086ms: waiting for machine to come up
	I0913 23:27:25.538164   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:25.538541   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:25.538565   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:25.538498   13377 retry.go:31] will retry after 1.081622308s: waiting for machine to come up
	I0913 23:27:26.621460   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:26.621931   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:26.621955   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:26.621857   13377 retry.go:31] will retry after 1.731303856s: waiting for machine to come up
	I0913 23:27:28.354521   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:28.355071   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:28.355099   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:28.355009   13377 retry.go:31] will retry after 1.496214945s: waiting for machine to come up
	I0913 23:27:29.852809   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:29.853265   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:29.853301   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:29.853227   13377 retry.go:31] will retry after 2.460158583s: waiting for machine to come up
	I0913 23:27:32.316929   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:32.317410   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:32.317431   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:32.317373   13377 retry.go:31] will retry after 3.034476235s: waiting for machine to come up
	I0913 23:27:35.353176   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:35.353654   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:35.353699   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:35.353589   13377 retry.go:31] will retry after 4.290331524s: waiting for machine to come up
	I0913 23:27:39.649352   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.650002   13355 main.go:141] libmachine: (addons-473197) Found IP for machine: 192.168.39.50
	I0913 23:27:39.650019   13355 main.go:141] libmachine: (addons-473197) Reserving static IP address...
	I0913 23:27:39.650027   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has current primary IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.650461   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find host DHCP lease matching {name: "addons-473197", mac: "52:54:00:2d:a5:2e", ip: "192.168.39.50"} in network mk-addons-473197
	I0913 23:27:39.721216   13355 main.go:141] libmachine: (addons-473197) DBG | Getting to WaitForSSH function...
	I0913 23:27:39.721243   13355 main.go:141] libmachine: (addons-473197) Reserved static IP address: 192.168.39.50
	I0913 23:27:39.721278   13355 main.go:141] libmachine: (addons-473197) Waiting for SSH to be available...
	I0913 23:27:39.723998   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.724611   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.724638   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.724950   13355 main.go:141] libmachine: (addons-473197) DBG | Using SSH client type: external
	I0913 23:27:39.724977   13355 main.go:141] libmachine: (addons-473197) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa (-rw-------)
	I0913 23:27:39.725008   13355 main.go:141] libmachine: (addons-473197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:27:39.725021   13355 main.go:141] libmachine: (addons-473197) DBG | About to run SSH command:
	I0913 23:27:39.725036   13355 main.go:141] libmachine: (addons-473197) DBG | exit 0
	I0913 23:27:39.855960   13355 main.go:141] libmachine: (addons-473197) DBG | SSH cmd err, output: <nil>: 
	I0913 23:27:39.856254   13355 main.go:141] libmachine: (addons-473197) KVM machine creation complete!
	I0913 23:27:39.856646   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:39.857244   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:39.857451   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:39.857626   13355 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:27:39.857643   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:27:39.858795   13355 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:27:39.858808   13355 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:27:39.858813   13355 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:27:39.858832   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:39.861250   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.861689   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.861723   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.861906   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:39.862060   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.862212   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.862395   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:39.862569   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:39.862742   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:39.862751   13355 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:27:39.967145   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:39.967169   13355 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:27:39.967179   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:39.969704   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.970052   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.970076   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.970268   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:39.970477   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.970645   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.970782   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:39.970951   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:39.971103   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:39.971115   13355 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:27:40.076316   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:27:40.076451   13355 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:27:40.076469   13355 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:27:40.076484   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.076736   13355 buildroot.go:166] provisioning hostname "addons-473197"
	I0913 23:27:40.076759   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.076929   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.079647   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.080051   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.080075   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.080207   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.080376   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.080576   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.080715   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.080902   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.081066   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.081078   13355 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-473197 && echo "addons-473197" | sudo tee /etc/hostname
	I0913 23:27:40.201203   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-473197
	
	I0913 23:27:40.201232   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.203941   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.204266   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.204295   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.204445   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.204612   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.204717   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.204938   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.205096   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.205257   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.205288   13355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-473197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-473197/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-473197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:27:40.315830   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:40.315864   13355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:27:40.315886   13355 buildroot.go:174] setting up certificates
	I0913 23:27:40.315900   13355 provision.go:84] configureAuth start
	I0913 23:27:40.315916   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.316174   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:40.318560   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.318909   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.318938   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.319047   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.320812   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.321063   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.321089   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.321172   13355 provision.go:143] copyHostCerts
	I0913 23:27:40.321244   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:27:40.321370   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:27:40.321425   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:27:40.321473   13355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.addons-473197 san=[127.0.0.1 192.168.39.50 addons-473197 localhost minikube]
	I0913 23:27:40.603148   13355 provision.go:177] copyRemoteCerts
	I0913 23:27:40.603210   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:27:40.603234   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.606258   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.606705   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.606739   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.607033   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.607251   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.607362   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.607463   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:40.689713   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:27:40.712453   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:27:40.735387   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:27:40.757966   13355 provision.go:87] duration metric: took 442.049406ms to configureAuth
	I0913 23:27:40.758001   13355 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:27:40.758169   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:27:40.758238   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.760689   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.761096   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.761116   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.761352   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.761591   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.761778   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.761925   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.762072   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.762249   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.762265   13355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:27:40.978781   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:27:40.978810   13355 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:27:40.978820   13355 main.go:141] libmachine: (addons-473197) Calling .GetURL
	I0913 23:27:40.980184   13355 main.go:141] libmachine: (addons-473197) DBG | Using libvirt version 6000000
	I0913 23:27:40.982058   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.982375   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.982407   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.982552   13355 main.go:141] libmachine: Docker is up and running!
	I0913 23:27:40.982564   13355 main.go:141] libmachine: Reticulating splines...
	I0913 23:27:40.982573   13355 client.go:171] duration metric: took 21.165531853s to LocalClient.Create
	I0913 23:27:40.982600   13355 start.go:167] duration metric: took 21.165604233s to libmachine.API.Create "addons-473197"
	I0913 23:27:40.982612   13355 start.go:293] postStartSetup for "addons-473197" (driver="kvm2")
	I0913 23:27:40.982626   13355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:27:40.982643   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:40.982883   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:27:40.982909   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.985049   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.985372   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.985397   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.985529   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.985759   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.985932   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.986038   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.069472   13355 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:27:41.073428   13355 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:27:41.073453   13355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:27:41.073517   13355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:27:41.073538   13355 start.go:296] duration metric: took 90.917797ms for postStartSetup
	I0913 23:27:41.073579   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:41.074107   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:41.077174   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.077818   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.077852   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.078209   13355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json ...
	I0913 23:27:41.078430   13355 start.go:128] duration metric: took 21.280308685s to createHost
	I0913 23:27:41.078523   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.080871   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.081492   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.081509   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.081740   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.081948   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.082106   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.082226   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.082357   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:41.082590   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:41.082607   13355 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:27:41.188427   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726270061.160461194
	
	I0913 23:27:41.188463   13355 fix.go:216] guest clock: 1726270061.160461194
	I0913 23:27:41.188474   13355 fix.go:229] Guest: 2024-09-13 23:27:41.160461194 +0000 UTC Remote: 2024-09-13 23:27:41.078444881 +0000 UTC m=+21.385670707 (delta=82.016313ms)
	I0913 23:27:41.188531   13355 fix.go:200] guest clock delta is within tolerance: 82.016313ms
	I0913 23:27:41.188539   13355 start.go:83] releasing machines lock for "addons-473197", held for 21.390482943s
	I0913 23:27:41.188568   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.188834   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:41.191630   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.192076   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.192098   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.192320   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.192816   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.192990   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.193060   13355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:27:41.193115   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.193231   13355 ssh_runner.go:195] Run: cat /version.json
	I0913 23:27:41.193263   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.195906   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196214   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196337   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.196366   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196541   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.196670   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.196705   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196706   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.196834   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.196880   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.197034   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.197031   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.197160   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.197329   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.272290   13355 ssh_runner.go:195] Run: systemctl --version
	I0913 23:27:41.309754   13355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:27:41.465120   13355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:27:41.470808   13355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:27:41.470872   13355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:27:41.486194   13355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:27:41.486219   13355 start.go:495] detecting cgroup driver to use...
	I0913 23:27:41.486277   13355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:27:41.501356   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:27:41.514148   13355 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:27:41.514201   13355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:27:41.526902   13355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:27:41.539813   13355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:27:41.653998   13355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:27:41.795256   13355 docker.go:233] disabling docker service ...
	I0913 23:27:41.795338   13355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:27:41.808732   13355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:27:41.820663   13355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:27:41.960800   13355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:27:42.071315   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:27:42.085863   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:42.104721   13355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:27:42.104778   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.115928   13355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:27:42.116006   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.126630   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.136692   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.146840   13355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:27:42.158680   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.169310   13355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.187197   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.197346   13355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:27:42.206456   13355 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:27:42.206517   13355 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:27:42.218600   13355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:27:42.228617   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:42.336875   13355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:27:42.432370   13355 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:27:42.432459   13355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:27:42.436970   13355 start.go:563] Will wait 60s for crictl version
	I0913 23:27:42.437040   13355 ssh_runner.go:195] Run: which crictl
	I0913 23:27:42.440590   13355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:27:42.475674   13355 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:27:42.475820   13355 ssh_runner.go:195] Run: crio --version
	I0913 23:27:42.501858   13355 ssh_runner.go:195] Run: crio --version
	I0913 23:27:42.529367   13355 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:27:42.530946   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:42.533556   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:42.533907   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:42.533934   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:42.534104   13355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:27:42.537936   13355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:42.549881   13355 kubeadm.go:883] updating cluster {Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:27:42.549978   13355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:42.550015   13355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:27:42.581270   13355 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 23:27:42.581333   13355 ssh_runner.go:195] Run: which lz4
	I0913 23:27:42.584936   13355 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 23:27:42.588777   13355 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 23:27:42.588812   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 23:27:43.814973   13355 crio.go:462] duration metric: took 1.230077023s to copy over tarball
	I0913 23:27:43.815032   13355 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 23:27:45.932346   13355 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117279223s)
	I0913 23:27:45.932374   13355 crio.go:469] duration metric: took 2.117376082s to extract the tarball
	I0913 23:27:45.932383   13355 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 23:27:45.968777   13355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:27:46.009560   13355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 23:27:46.009591   13355 cache_images.go:84] Images are preloaded, skipping loading
	I0913 23:27:46.009602   13355 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.31.1 crio true true} ...
	I0913 23:27:46.009706   13355 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-473197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:27:46.009801   13355 ssh_runner.go:195] Run: crio config
	I0913 23:27:46.058212   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:46.058233   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:46.058242   13355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:27:46.058265   13355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-473197 NodeName:addons-473197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:27:46.058390   13355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-473197"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:27:46.058449   13355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:46.067747   13355 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:27:46.067836   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 23:27:46.076323   13355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:27:46.091845   13355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:27:46.107011   13355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0913 23:27:46.122091   13355 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0913 23:27:46.125699   13355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:46.136584   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:46.243887   13355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:46.259537   13355 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197 for IP: 192.168.39.50
	I0913 23:27:46.259566   13355 certs.go:194] generating shared ca certs ...
	I0913 23:27:46.259587   13355 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.259827   13355 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:27:46.322225   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt ...
	I0913 23:27:46.322258   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt: {Name:mke46b90c0d6e2a0d22a599cb0925a94af7cb890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.322470   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key ...
	I0913 23:27:46.322490   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key: {Name:mkeed16d615b1d7b45fa5c87fb359fe1941c704d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.322591   13355 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:27:46.462878   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt ...
	I0913 23:27:46.462907   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt: {Name:mk6b1da2351e5a548bbce01c78eb8ec03bbc9cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.463051   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key ...
	I0913 23:27:46.463061   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key: {Name:mk7ea15f150fb9588b92c5379cfdb24690c332b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.463123   13355 certs.go:256] generating profile certs ...
	I0913 23:27:46.463171   13355 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key
	I0913 23:27:46.463184   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt with IP's: []
	I0913 23:27:46.657652   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt ...
	I0913 23:27:46.657686   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: {Name:mk5f50c2130cbf6a4ae973b8a645d8dcfcea5e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.657857   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key ...
	I0913 23:27:46.657870   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key: {Name:mk3ec218d1db7592ee3144e8458afc6e59c3670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.657934   13355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74
	I0913 23:27:46.657951   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I0913 23:27:46.879416   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 ...
	I0913 23:27:46.879453   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74: {Name:mkcaab583500a609e501e4f9e7f67d24dbf8d267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.879638   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74 ...
	I0913 23:27:46.879651   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74: {Name:mk892a816842ba211b137a4d62befccce1e5b073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.879724   13355 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt
	I0913 23:27:46.879814   13355 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key
	I0913 23:27:46.879862   13355 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key
	I0913 23:27:46.879879   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt with IP's: []
	I0913 23:27:46.991498   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt ...
	I0913 23:27:46.991530   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt: {Name:mkb643e56ac833ce28178330ec7aa1dda3e56b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.991685   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key ...
	I0913 23:27:46.991696   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key: {Name:mka2351863ee87552b80a1470ad4d30098e9cd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.991874   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:27:46.991908   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:27:46.991933   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:27:46.991956   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:27:46.992518   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:27:47.019880   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:27:47.046183   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:27:47.074948   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:27:47.097532   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 23:27:47.121957   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 23:27:47.146163   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:27:47.170775   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 23:27:47.194281   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:27:47.217329   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:27:47.233678   13355 ssh_runner.go:195] Run: openssl version
	I0913 23:27:47.239354   13355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:27:47.249994   13355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.254467   13355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.254522   13355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.260224   13355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:27:47.270703   13355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:27:47.274594   13355 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:27:47.274645   13355 kubeadm.go:392] StartCluster: {Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:47.274712   13355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 23:27:47.274753   13355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 23:27:47.309951   13355 cri.go:89] found id: ""
	I0913 23:27:47.310012   13355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:27:47.320386   13355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:27:47.330943   13355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:27:47.341759   13355 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:27:47.341780   13355 kubeadm.go:157] found existing configuration files:
	
	I0913 23:27:47.341834   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:27:47.351646   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:27:47.351717   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:27:47.361297   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:27:47.370696   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:27:47.370762   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:27:47.380638   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:27:47.389574   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:27:47.389643   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:27:47.398896   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:27:47.408606   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:27:47.408676   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
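Editor's note: the grep/rm pairs above are the stale-config check — each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and anything else is removed so that kubeadm can regenerate it. A minimal shell sketch of the same check (endpoint and paths taken from the log; not the literal kubeadm.go implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected API endpoint,
      # otherwise remove it so the 'kubeadm init' run below regenerates it
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done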
	I0913 23:27:47.418572   13355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 23:27:47.479386   13355 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:27:47.479472   13355 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:27:47.586391   13355 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:27:47.586505   13355 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:27:47.586582   13355 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:27:47.595987   13355 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:27:47.760778   13355 out.go:235]   - Generating certificates and keys ...
	I0913 23:27:47.760900   13355 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:27:47.760974   13355 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:27:47.761064   13355 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:27:47.820089   13355 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:27:47.938680   13355 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:27:48.078014   13355 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:27:48.155692   13355 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:27:48.155847   13355 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-473197 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0913 23:27:48.397795   13355 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:27:48.397964   13355 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-473197 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0913 23:27:48.511295   13355 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:27:48.569260   13355 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:27:48.662216   13355 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:27:48.662475   13355 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:27:48.761318   13355 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:27:49.204225   13355 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:27:49.285052   13355 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:27:49.530932   13355 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:27:49.596255   13355 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:27:49.596809   13355 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:27:49.599274   13355 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:27:49.601179   13355 out.go:235]   - Booting up control plane ...
	I0913 23:27:49.601276   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:27:49.601348   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:27:49.601425   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:27:49.616053   13355 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:27:49.622415   13355 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:27:49.622489   13355 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:27:49.742292   13355 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:27:49.742405   13355 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:27:50.257638   13355 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 516.207513ms
	I0913 23:27:50.257765   13355 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:27:56.254726   13355 kubeadm.go:310] [api-check] The API server is healthy after 6.001344082s
	I0913 23:27:56.266993   13355 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:27:56.292355   13355 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:27:56.323160   13355 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:27:56.323401   13355 kubeadm.go:310] [mark-control-plane] Marking the node addons-473197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:27:56.339238   13355 kubeadm.go:310] [bootstrap-token] Using token: 39ittl.8h26ubvfwyg116f4
	I0913 23:27:56.340707   13355 out.go:235]   - Configuring RBAC rules ...
	I0913 23:27:56.340853   13355 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:27:56.349574   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:27:56.357917   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:27:56.365875   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:27:56.370732   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:27:56.375167   13355 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:27:56.666388   13355 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:27:57.109792   13355 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:27:57.661157   13355 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:27:57.662033   13355 kubeadm.go:310] 
	I0913 23:27:57.662163   13355 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:27:57.662184   13355 kubeadm.go:310] 
	I0913 23:27:57.662303   13355 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:27:57.662326   13355 kubeadm.go:310] 
	I0913 23:27:57.662361   13355 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:27:57.662417   13355 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:27:57.662496   13355 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:27:57.662508   13355 kubeadm.go:310] 
	I0913 23:27:57.662586   13355 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:27:57.662598   13355 kubeadm.go:310] 
	I0913 23:27:57.662671   13355 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:27:57.662687   13355 kubeadm.go:310] 
	I0913 23:27:57.662760   13355 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:27:57.662855   13355 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:27:57.662958   13355 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:27:57.662976   13355 kubeadm.go:310] 
	I0913 23:27:57.663089   13355 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:27:57.663197   13355 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:27:57.663210   13355 kubeadm.go:310] 
	I0913 23:27:57.663318   13355 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 39ittl.8h26ubvfwyg116f4 \
	I0913 23:27:57.663464   13355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0913 23:27:57.663493   13355 kubeadm.go:310] 	--control-plane 
	I0913 23:27:57.663502   13355 kubeadm.go:310] 
	I0913 23:27:57.663615   13355 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:27:57.663626   13355 kubeadm.go:310] 
	I0913 23:27:57.663737   13355 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 39ittl.8h26ubvfwyg116f4 \
	I0913 23:27:57.663903   13355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0913 23:27:57.665427   13355 kubeadm.go:310] W0913 23:27:47.456712     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:57.665734   13355 kubeadm.go:310] W0913 23:27:47.457675     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:57.665846   13355 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 23:27:57.665879   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:57.665892   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:57.667738   13355 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 23:27:57.668898   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 23:27:57.681342   13355 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
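Editor's note: the 496-byte file copied above is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. For illustration only, a conflist of this kind typically looks like the sketch below; the exact content minikube writes may differ, and the subnet and plugin options shown here are assumptions:

    # illustrative bridge CNI config; NOT the exact bytes minikube generates
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF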
	I0913 23:27:57.704842   13355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:27:57.704978   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:57.705001   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-473197 minikube.k8s.io/updated_at=2024_09_13T23_27_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-473197 minikube.k8s.io/primary=true
	I0913 23:27:57.725824   13355 ops.go:34] apiserver oom_adj: -16
	I0913 23:27:57.846283   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:58.347074   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:58.847401   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:59.346340   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:59.846585   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:00.346364   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:00.846560   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:01.347311   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:01.847237   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:02.346723   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:02.425605   13355 kubeadm.go:1113] duration metric: took 4.720714541s to wait for elevateKubeSystemPrivileges
	I0913 23:28:02.425645   13355 kubeadm.go:394] duration metric: took 15.151004151s to StartCluster
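Editor's note: the repeated "kubectl get sa default" runs above are a readiness poll — after creating the minikube-rbac clusterrolebinding, the start-up waits until the default ServiceAccount exists before continuing (the elevateKubeSystemPrivileges step timed at ~4.7s above). Roughly equivalent to the following, with the binary path and kubeconfig taken from the log:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the ServiceAccount controller has created "default"
    done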
	I0913 23:28:02.425662   13355 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:28:02.425785   13355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:28:02.426125   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:28:02.426288   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:28:02.426308   13355 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:28:02.426365   13355 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
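Editor's note: the toEnable map above drives the per-addon setup that follows (yakd, nvidia-device-plugin, csi-hostpath-driver, registry, metrics-server, and so on). The same set can be toggled manually per profile with the minikube CLI, for example:

    # illustrative: enable/disable individual addons for this profile
    minikube addons enable registry -p addons-473197
    minikube addons enable metrics-server -p addons-473197
    minikube addons disable volcano -p addons-473197   # volcano is unsupported on crio (see the warning later in this log)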
	I0913 23:28:02.426474   13355 addons.go:69] Setting yakd=true in profile "addons-473197"
	I0913 23:28:02.426504   13355 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-473197"
	I0913 23:28:02.426508   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:28:02.426517   13355 addons.go:234] Setting addon yakd=true in "addons-473197"
	I0913 23:28:02.426521   13355 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-473197"
	I0913 23:28:02.426514   13355 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-473197"
	I0913 23:28:02.426549   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426556   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426559   13355 addons.go:69] Setting helm-tiller=true in profile "addons-473197"
	I0913 23:28:02.426574   13355 addons.go:234] Setting addon helm-tiller=true in "addons-473197"
	I0913 23:28:02.426574   13355 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-473197"
	I0913 23:28:02.426596   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426597   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426602   13355 addons.go:69] Setting ingress=true in profile "addons-473197"
	I0913 23:28:02.426631   13355 addons.go:234] Setting addon ingress=true in "addons-473197"
	I0913 23:28:02.426669   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426477   13355 addons.go:69] Setting gcp-auth=true in profile "addons-473197"
	I0913 23:28:02.426731   13355 mustload.go:65] Loading cluster: addons-473197
	I0913 23:28:02.426862   13355 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-473197"
	I0913 23:28:02.426884   13355 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-473197"
	I0913 23:28:02.426885   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:28:02.427037   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427060   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427061   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427087   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427085   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427129   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427141   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427152   13355 addons.go:69] Setting metrics-server=true in profile "addons-473197"
	I0913 23:28:02.427165   13355 addons.go:234] Setting addon metrics-server=true in "addons-473197"
	I0913 23:28:02.426553   13355 addons.go:69] Setting ingress-dns=true in profile "addons-473197"
	I0913 23:28:02.427179   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427190   13355 addons.go:69] Setting volcano=true in profile "addons-473197"
	I0913 23:28:02.427201   13355 addons.go:234] Setting addon volcano=true in "addons-473197"
	I0913 23:28:02.427211   13355 addons.go:69] Setting registry=true in profile "addons-473197"
	I0913 23:28:02.427221   13355 addons.go:234] Setting addon registry=true in "addons-473197"
	I0913 23:28:02.427222   13355 addons.go:69] Setting storage-provisioner=true in profile "addons-473197"
	I0913 23:28:02.427145   13355 addons.go:69] Setting inspektor-gadget=true in profile "addons-473197"
	I0913 23:28:02.427230   13355 addons.go:69] Setting volumesnapshots=true in profile "addons-473197"
	I0913 23:28:02.427235   13355 addons.go:234] Setting addon storage-provisioner=true in "addons-473197"
	I0913 23:28:02.427239   13355 addons.go:234] Setting addon volumesnapshots=true in "addons-473197"
	I0913 23:28:02.427241   13355 addons.go:234] Setting addon inspektor-gadget=true in "addons-473197"
	I0913 23:28:02.427179   13355 addons.go:234] Setting addon ingress-dns=true in "addons-473197"
	I0913 23:28:02.426496   13355 addons.go:69] Setting cloud-spanner=true in profile "addons-473197"
	I0913 23:28:02.427256   13355 addons.go:234] Setting addon cloud-spanner=true in "addons-473197"
	I0913 23:28:02.426488   13355 addons.go:69] Setting default-storageclass=true in profile "addons-473197"
	I0913 23:28:02.427269   13355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-473197"
	I0913 23:28:02.427330   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427431   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427455   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427463   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427473   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427490   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427456   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427570   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427595   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427628   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427709   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427731   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427821   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427840   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427846   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427873   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427975   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427998   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428068   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428087   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428114   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.428139   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428165   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428185   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428215   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428298   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.428659   13355 out.go:177] * Verifying Kubernetes components...
	I0913 23:28:02.430798   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:28:02.443895   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0913 23:28:02.447836   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0913 23:28:02.460485   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0913 23:28:02.460874   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.460923   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.462824   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.462871   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.475879   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.475930   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.475955   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476088   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476184   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476509   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476527   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476781   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476799   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476841   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476853   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476873   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.477429   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.477470   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.477700   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.477702   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.478295   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.478318   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.478339   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.478341   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.490076   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34039
	I0913 23:28:02.490970   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.491812   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.491835   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.492304   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.492616   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I0913 23:28:02.493134   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.493820   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.493836   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.494007   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.494989   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0913 23:28:02.497252   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.498698   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.498751   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.499326   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.501553   13355 addons.go:234] Setting addon default-storageclass=true in "addons-473197"
	I0913 23:28:02.501602   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.501968   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.502005   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.502350   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.502365   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.502909   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.503277   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.506570   13355 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-473197"
	I0913 23:28:02.506622   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.507002   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.507046   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.514594   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0913 23:28:02.514782   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0913 23:28:02.515367   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.516584   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.516605   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.517040   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.517686   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.517727   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.518257   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.518363   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0913 23:28:02.519018   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.519037   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.519440   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.519657   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.522038   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I0913 23:28:02.522040   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.522394   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.522435   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.522727   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.522970   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.523316   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.523334   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.523520   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.523532   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.523938   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.524557   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.524603   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.525745   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0913 23:28:02.526425   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.526729   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0913 23:28:02.527167   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.527412   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.527429   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.527767   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.527985   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.528007   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.529141   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.529182   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.529468   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.529539   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.530024   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.530070   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.531195   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0913 23:28:02.531769   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0913 23:28:02.532339   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.532382   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.532396   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.532869   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.532894   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.533439   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.533682   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0913 23:28:02.534092   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0913 23:28:02.539974   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0913 23:28:02.540404   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.541583   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.541602   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.541629   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0913 23:28:02.541958   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.542365   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.544696   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.546879   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0913 23:28:02.547431   13355 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 23:28:02.548325   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0913 23:28:02.549791   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:28:02.549808   13355 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 23:28:02.549834   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.552042   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0913 23:28:02.564110   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0913 23:28:02.564127   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.564132   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0913 23:28:02.564116   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.564213   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.564232   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.564116   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0913 23:28:02.564383   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0913 23:28:02.564467   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.564922   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.564951   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.564962   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.565052   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.565067   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.565129   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.565819   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.565933   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.565964   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566027   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.566035   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566045   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.566054   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.566091   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566112   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566136   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566148   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566172   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0913 23:28:02.567152   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567167   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567256   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567262   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567340   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567349   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567388   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.567474   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567480   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567531   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567546   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567556   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.567609   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.567654   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567664   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567713   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567738   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567747   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567749   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567798   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567823   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567915   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567929   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.568085   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568102   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.568148   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568172   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568188   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568215   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568340   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568402   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.568439   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.568464   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568519   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568665   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.568699   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.569269   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.569415   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.569426   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.569482   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.569514   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.569816   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.570420   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.570455   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.570772   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.571101   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.571153   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.571923   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.571964   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.571931   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.572189   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.572204   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.572331   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.572395   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:02.572403   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:02.573462   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.573488   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.573510   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:02.573528   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:02.574423   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:02.574434   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:02.574441   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:02.573549   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.575002   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:02.575025   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.575042   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:02.577246   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	W0913 23:28:02.577336   13355 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0913 23:28:02.577577   13355 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0913 23:28:02.577709   13355 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 23:28:02.578460   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:02.578635   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0913 23:28:02.578647   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0913 23:28:02.578665   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.579318   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.579608   13355 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:28:02.579851   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 23:28:02.579873   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.579633   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 23:28:02.580938   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:02.580994   13355 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 23:28:02.582107   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 23:28:02.582192   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:28:02.582204   13355 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 23:28:02.582234   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.583277   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.583707   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.583726   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.584022   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.584195   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.584220   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 23:28:02.584401   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.584539   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.584948   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.585611   13355 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:28:02.585633   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 23:28:02.585650   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.585705   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 23:28:02.585820   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.585840   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.586103   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.586338   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.586392   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.586488   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.586648   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.586902   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.586918   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.587228   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.587417   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.587574   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.587716   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.588599   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 23:28:02.589562   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.589986   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.590012   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.590304   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.590503   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.590650   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.590784   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.591169   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 23:28:02.592433   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 23:28:02.592982   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0913 23:28:02.593391   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.593880   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.593904   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.594231   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.594357   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 23:28:02.594365   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.594869   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I0913 23:28:02.595307   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.595835   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.595857   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.596170   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0913 23:28:02.596327   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 23:28:02.596346   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.596551   13355 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:28:02.596571   13355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:28:02.596587   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.596642   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.597297   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.597523   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.597839   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:28:02.597858   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 23:28:02.597882   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.598022   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.598046   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.598359   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.598480   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.600521   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.601746   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.601897   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602020   13355 out.go:177]   - Using image docker.io/busybox:stable
	I0913 23:28:02.602261   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.602280   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602309   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.602332   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602578   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.602638   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.602773   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.602789   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.602928   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.602925   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.603038   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.603319   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.603369   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.604203   13355 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 23:28:02.605136   13355 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 23:28:02.605271   13355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:28:02.605291   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 23:28:02.605308   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.605737   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0913 23:28:02.606098   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.606452   13355 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 23:28:02.606469   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 23:28:02.606483   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.606619   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.606637   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.606672   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0913 23:28:02.607023   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.607041   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.607206   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.607447   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.607462   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.608306   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.608506   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.609969   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.610193   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.610631   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.610650   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.610800   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.610936   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.611209   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.611513   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.611607   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.611708   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 23:28:02.611881   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.612337   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.612359   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.612853   13355 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 23:28:02.612890   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.612853   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:28:02.612935   13355 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 23:28:02.612955   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.613679   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.614172   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.614301   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.614358   13355 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:28:02.614375   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 23:28:02.614391   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.616797   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617557   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.617585   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617630   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.617685   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617710   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0913 23:28:02.617846   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.617859   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.617871   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.618067   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.618131   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.618188   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.618375   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.618533   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.618780   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.618907   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.618920   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.619116   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.619427   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.619639   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.621163   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.623020   13355 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 23:28:02.624487   13355 out.go:177]   - Using image docker.io/registry:2.8.3
	W0913 23:28:02.625390   13355 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.50:22: read: connection reset by peer
	I0913 23:28:02.625420   13355 retry.go:31] will retry after 203.721913ms: ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.50:22: read: connection reset by peer
	I0913 23:28:02.625979   13355 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:28:02.625996   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 23:28:02.626020   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.626338   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0913 23:28:02.626915   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.628248   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.628278   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.628731   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.628951   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.629689   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.630408   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I0913 23:28:02.630603   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.630607   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.630644   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.630752   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.630897   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.630954   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.631042   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.631079   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.631402   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.631424   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.631759   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.632088   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.632893   13355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:28:02.633647   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.634372   13355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:28:02.634400   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:28:02.634419   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.634983   13355 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 23:28:02.635926   13355 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:28:02.635944   13355 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 23:28:02.635961   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.637913   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638299   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.638327   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638429   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638456   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.638653   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.638829   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.639005   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.638906   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.639049   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.639088   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.639199   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.639371   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.639577   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:03.010874   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 23:28:03.011331   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:28:03.027301   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:28:03.027323   13355 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 23:28:03.067510   13355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:28:03.067570   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 23:28:03.088658   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:28:03.092881   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:28:03.096079   13355 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:28:03.096109   13355 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 23:28:03.118568   13355 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:28:03.118604   13355 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 23:28:03.151579   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:28:03.151606   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 23:28:03.163844   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:28:03.171501   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0913 23:28:03.171531   13355 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0913 23:28:03.174545   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:28:03.174571   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 23:28:03.212903   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:28:03.223453   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:28:03.228572   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:28:03.228604   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 23:28:03.250777   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:28:03.250803   13355 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 23:28:03.279463   13355 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 23:28:03.279488   13355 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 23:28:03.302426   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:28:03.302459   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 23:28:03.319332   13355 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:28:03.319353   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 23:28:03.330057   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:28:03.330085   13355 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0913 23:28:03.407024   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:28:03.407056   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 23:28:03.440023   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:28:03.440055   13355 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 23:28:03.479290   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:28:03.479317   13355 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 23:28:03.491399   13355 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:28:03.491426   13355 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 23:28:03.520500   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:28:03.531329   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:28:03.531360   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 23:28:03.560362   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:28:03.703012   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:28:03.703042   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 23:28:03.713271   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:28:03.713301   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 23:28:03.714632   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:28:03.714653   13355 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 23:28:03.719658   13355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:28:03.719678   13355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 23:28:03.737269   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:28:03.737304   13355 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 23:28:03.889071   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:28:03.908115   13355 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:03.908155   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 23:28:03.918960   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:28:03.941219   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:28:03.941249   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 23:28:03.994232   13355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:28:03.994259   13355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 23:28:04.229209   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:04.267554   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:28:04.267577   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 23:28:04.330516   13355 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:28:04.330552   13355 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 23:28:04.536905   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:28:04.536936   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 23:28:04.590128   13355 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:28:04.590152   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 23:28:04.788803   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:28:04.816897   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:28:04.816931   13355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 23:28:05.234442   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:28:05.234478   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 23:28:05.583587   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:28:05.583614   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 23:28:05.923679   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:28:05.923710   13355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 23:28:06.123490   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.112567467s)
	I0913 23:28:06.123547   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:06.123557   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:06.123855   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:06.123869   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:06.123883   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:06.123892   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:06.124216   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:06.124238   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:06.363736   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:28:07.633977   13355 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.566365758s)
	I0913 23:28:07.634011   13355 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 23:28:07.634023   13355 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.566476402s)
	I0913 23:28:07.634039   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.622680865s)
	I0913 23:28:07.634089   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.634105   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.634380   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.634428   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.634436   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.634448   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.634455   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.634784   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.634856   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.634890   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.635047   13355 node_ready.go:35] waiting up to 6m0s for node "addons-473197" to be "Ready" ...
	I0913 23:28:07.650081   13355 node_ready.go:49] node "addons-473197" has status "Ready":"True"
	I0913 23:28:07.650107   13355 node_ready.go:38] duration metric: took 15.042078ms for node "addons-473197" to be "Ready" ...
	I0913 23:28:07.650117   13355 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:07.696618   13355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:07.988840   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.900143589s)
	I0913 23:28:07.988889   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.988902   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.988909   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.895998713s)
	I0913 23:28:07.988947   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.988962   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.988991   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.825104432s)
	I0913 23:28:07.989064   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.776127396s)
	I0913 23:28:07.989142   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989163   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989177   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989178   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.989192   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989202   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989230   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989069   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989500   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989274   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989532   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989541   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989547   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989777   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.989817   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989833   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989842   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989843   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989850   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989854   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989856   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989864   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989280   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990285   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990340   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990363   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990372   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989408   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990392   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.990409   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990434   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990442   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989433   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.992583   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.992598   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:08.078646   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:08.078674   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:08.079091   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:08.079153   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:08.079168   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	W0913 23:28:08.079276   13355 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0913 23:28:08.086087   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:08.086136   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:08.086492   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:08.086562   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:08.086620   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:08.150438   13355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-473197" context rescaled to 1 replicas
	I0913 23:28:08.748384   13355 pod_ready.go:93] pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.748408   13355 pod_ready.go:82] duration metric: took 1.05175792s for pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.748418   13355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.799453   13355 pod_ready.go:93] pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.799484   13355 pod_ready.go:82] duration metric: took 51.058777ms for pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.799510   13355 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.874578   13355 pod_ready.go:93] pod "etcd-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.874605   13355 pod_ready.go:82] duration metric: took 75.087265ms for pod "etcd-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.874616   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.604747   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 23:28:09.604789   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:09.608703   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:09.609227   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:09.609263   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:09.609479   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:09.609669   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:09.609849   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:09.610002   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:09.882148   13355 pod_ready.go:93] pod "kube-apiserver-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:09.882180   13355 pod_ready.go:82] duration metric: took 1.007556164s for pod "kube-apiserver-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.882192   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.894451   13355 pod_ready.go:93] pod "kube-controller-manager-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:09.894497   13355 pod_ready.go:82] duration metric: took 12.295374ms for pod "kube-controller-manager-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.894514   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vg8p5" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.038855   13355 pod_ready.go:93] pod "kube-proxy-vg8p5" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:10.038887   13355 pod_ready.go:82] duration metric: took 144.362352ms for pod "kube-proxy-vg8p5" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.038901   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.156523   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 23:28:10.274748   13355 addons.go:234] Setting addon gcp-auth=true in "addons-473197"
	I0913 23:28:10.274811   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:10.275129   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:10.275181   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:10.290032   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0913 23:28:10.290544   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:10.291078   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:10.291104   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:10.291475   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:10.292074   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:10.292121   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:10.306929   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0913 23:28:10.307597   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:10.308136   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:10.308165   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:10.308479   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:10.308653   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:10.310373   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:10.310613   13355 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 23:28:10.310635   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:10.313460   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:10.313874   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:10.313918   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:10.314081   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:10.314245   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:10.314388   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:10.314538   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:10.441154   13355 pod_ready.go:93] pod "kube-scheduler-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:10.441189   13355 pod_ready.go:82] duration metric: took 402.279342ms for pod "kube-scheduler-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.441203   13355 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:11.038273   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.814781844s)
	I0913 23:28:11.038325   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038338   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038351   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.517814291s)
	I0913 23:28:11.038392   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038411   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038417   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.478018749s)
	I0913 23:28:11.038450   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038462   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038481   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.149383482s)
	I0913 23:28:11.038503   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038527   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.119530303s)
	I0913 23:28:11.038556   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038571   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038518   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038634   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.809394559s)
	W0913 23:28:11.038660   13355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:28:11.038679   13355 retry.go:31] will retry after 183.620302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:28:11.038717   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.249875908s)
	I0913 23:28:11.038739   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038748   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038848   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.038862   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.038871   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038865   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.038888   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.038899   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038910   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038878   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039010   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039031   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039036   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039057   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039069   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039122   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039149   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039160   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039167   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039166   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039204   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039214   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039133   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039231   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039239   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039245   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039016   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039310   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039467   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039314   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039385   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039222   13355 addons.go:475] Verifying addon ingress=true in "addons-473197"
	I0913 23:28:11.039415   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039428   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.040400   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.040410   13355 addons.go:475] Verifying addon metrics-server=true in "addons-473197"
	I0913 23:28:11.041432   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.041448   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.041458   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.041473   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.041801   13355 out.go:177] * Verifying ingress addon...
	I0913 23:28:11.042190   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042207   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042216   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.042219   13355 addons.go:475] Verifying addon registry=true in "addons-473197"
	I0913 23:28:11.042423   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042430   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042439   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042443   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042448   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.043752   13355 out.go:177] * Verifying registry addon...
	I0913 23:28:11.043754   13355 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-473197 service yakd-dashboard -n yakd-dashboard
	
	I0913 23:28:11.044788   13355 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 23:28:11.046424   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 23:28:11.081206   13355 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 23:28:11.081236   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.081287   13355 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 23:28:11.081297   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.223004   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:11.561874   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.562467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.057966   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.058896   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.469345   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:12.561195   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.600083   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.619662   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.255869457s)
	I0913 23:28:12.619725   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.619738   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.619748   13355 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.309112473s)
	I0913 23:28:12.619902   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396587931s)
	I0913 23:28:12.619956   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.619976   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620101   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620159   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620169   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620183   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.620191   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620194   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620202   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620223   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.620230   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620426   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620437   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620437   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620447   13355 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-473197"
	I0913 23:28:12.620532   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620512   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620564   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.623355   13355 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 23:28:12.623358   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:12.625412   13355 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 23:28:12.626098   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 23:28:12.626980   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:28:12.627005   13355 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 23:28:12.634155   13355 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 23:28:12.634185   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.701404   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:28:12.701431   13355 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 23:28:12.784012   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:12.784039   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 23:28:12.826052   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:13.050608   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.054294   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.131996   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.549130   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.550698   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.654447   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.954168   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.128064006s)
	I0913 23:28:13.954227   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:13.954246   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:13.954502   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:13.954524   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:13.954543   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:13.954551   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:13.954561   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:13.954804   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:13.954864   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:13.954887   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:13.956609   13355 addons.go:475] Verifying addon gcp-auth=true in "addons-473197"
	I0913 23:28:13.958261   13355 out.go:177] * Verifying gcp-auth addon...
	I0913 23:28:13.960562   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 23:28:14.052223   13355 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:28:14.052254   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:14.137186   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.137455   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.211253   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.466086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:14.550740   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.552353   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.633397   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.950640   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:14.966723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:15.066865   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.067365   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.131415   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.466378   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:15.549510   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.552396   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.632635   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.964956   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:16.049836   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.054146   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.131263   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.464327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:16.549627   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.553008   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.632296   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.225129   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:17.225473   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:17.225716   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.226083   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.226210   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.464982   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:17.550258   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.550361   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.630780   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.964491   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:18.049246   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.050330   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.131607   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.464703   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:18.549790   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.550896   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.631297   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.965276   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:19.049836   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.051294   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.131697   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.447973   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:19.464571   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:19.550276   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.551651   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.631103   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.964917   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:20.049683   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.050503   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.130574   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.464865   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:20.550041   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.551487   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.631097   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.969748   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:21.069252   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.069792   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.132416   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.452205   13355 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:21.452227   13355 pod_ready.go:82] duration metric: took 11.011016466s for pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:21.452243   13355 pod_ready.go:39] duration metric: took 13.802114071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:21.452257   13355 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:28:21.452309   13355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:21.464504   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:21.469459   13355 api_server.go:72] duration metric: took 19.043113394s to wait for apiserver process to appear ...
	I0913 23:28:21.469484   13355 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:28:21.469502   13355 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0913 23:28:21.474255   13355 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0913 23:28:21.475191   13355 api_server.go:141] control plane version: v1.31.1
	I0913 23:28:21.475215   13355 api_server.go:131] duration metric: took 5.722944ms to wait for apiserver health ...
	I0913 23:28:21.475222   13355 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:28:21.482377   13355 system_pods.go:59] 18 kube-system pods found
	I0913 23:28:21.482406   13355 system_pods.go:61] "coredns-7c65d6cfc9-kx4xn" [f7804727-02ec-474f-b927-f1c4b25ebc89] Running
	I0913 23:28:21.482416   13355 system_pods.go:61] "csi-hostpath-attacher-0" [b0107b78-0c42-480c-8e34-183874425dcd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:21.482422   13355 system_pods.go:61] "csi-hostpath-resizer-0" [4702d211-9a00-4c2c-8be1-9fa3a113583b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:21.482432   13355 system_pods.go:61] "csi-hostpathplugin-b8vk7" [f73ad797-356a-4442-93ce-41561df1c69e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:21.482439   13355 system_pods.go:61] "etcd-addons-473197" [e80abbef-1287-423a-9a02-307822608583] Running
	I0913 23:28:21.482445   13355 system_pods.go:61] "kube-apiserver-addons-473197" [3d5345af-6e8f-473f-a003-2319da2b81c8] Running
	I0913 23:28:21.482450   13355 system_pods.go:61] "kube-controller-manager-addons-473197" [44103129-212d-4d61-9db8-89d56eae1e01] Running
	I0913 23:28:21.482461   13355 system_pods.go:61] "kube-ingress-dns-minikube" [3db76d21-1e5d-4ece-8925-c84d0df606bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 23:28:21.482472   13355 system_pods.go:61] "kube-proxy-vg8p5" [af4c8131-921e-411d-853d-135361aa197b] Running
	I0913 23:28:21.482478   13355 system_pods.go:61] "kube-scheduler-addons-473197" [4e458740-ccbe-4f06-b2f3-f721aa78a0af] Running
	I0913 23:28:21.482484   13355 system_pods.go:61] "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:21.482500   13355 system_pods.go:61] "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
	I0913 23:28:21.482510   13355 system_pods.go:61] "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:21.482517   13355 system_pods.go:61] "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:21.482524   13355 system_pods.go:61] "snapshot-controller-56fcc65765-9lcg8" [ed7715dd-0396-4272-bc7f-531d103d8a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.482532   13355 system_pods.go:61] "snapshot-controller-56fcc65765-f8fq2" [3c9ad9a8-2450-4bf4-a6c6-4e2ca0026232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.482537   13355 system_pods.go:61] "storage-provisioner" [8268a064-fb82-447e-987d-931165d33b2d] Running
	I0913 23:28:21.482547   13355 system_pods.go:61] "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:21.482560   13355 system_pods.go:74] duration metric: took 7.331476ms to wait for pod list to return data ...
	I0913 23:28:21.482573   13355 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:28:21.484999   13355 default_sa.go:45] found service account: "default"
	I0913 23:28:21.485018   13355 default_sa.go:55] duration metric: took 2.439792ms for default service account to be created ...
	I0913 23:28:21.485024   13355 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:28:21.492239   13355 system_pods.go:86] 18 kube-system pods found
	I0913 23:28:21.492270   13355 system_pods.go:89] "coredns-7c65d6cfc9-kx4xn" [f7804727-02ec-474f-b927-f1c4b25ebc89] Running
	I0913 23:28:21.492278   13355 system_pods.go:89] "csi-hostpath-attacher-0" [b0107b78-0c42-480c-8e34-183874425dcd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:21.492304   13355 system_pods.go:89] "csi-hostpath-resizer-0" [4702d211-9a00-4c2c-8be1-9fa3a113583b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:21.492313   13355 system_pods.go:89] "csi-hostpathplugin-b8vk7" [f73ad797-356a-4442-93ce-41561df1c69e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:21.492317   13355 system_pods.go:89] "etcd-addons-473197" [e80abbef-1287-423a-9a02-307822608583] Running
	I0913 23:28:21.492322   13355 system_pods.go:89] "kube-apiserver-addons-473197" [3d5345af-6e8f-473f-a003-2319da2b81c8] Running
	I0913 23:28:21.492326   13355 system_pods.go:89] "kube-controller-manager-addons-473197" [44103129-212d-4d61-9db8-89d56eae1e01] Running
	I0913 23:28:21.492332   13355 system_pods.go:89] "kube-ingress-dns-minikube" [3db76d21-1e5d-4ece-8925-c84d0df606bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 23:28:21.492336   13355 system_pods.go:89] "kube-proxy-vg8p5" [af4c8131-921e-411d-853d-135361aa197b] Running
	I0913 23:28:21.492345   13355 system_pods.go:89] "kube-scheduler-addons-473197" [4e458740-ccbe-4f06-b2f3-f721aa78a0af] Running
	I0913 23:28:21.492354   13355 system_pods.go:89] "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:21.492361   13355 system_pods.go:89] "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
	I0913 23:28:21.492367   13355 system_pods.go:89] "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:21.492375   13355 system_pods.go:89] "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:21.492382   13355 system_pods.go:89] "snapshot-controller-56fcc65765-9lcg8" [ed7715dd-0396-4272-bc7f-531d103d8a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.492387   13355 system_pods.go:89] "snapshot-controller-56fcc65765-f8fq2" [3c9ad9a8-2450-4bf4-a6c6-4e2ca0026232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.492391   13355 system_pods.go:89] "storage-provisioner" [8268a064-fb82-447e-987d-931165d33b2d] Running
	I0913 23:28:21.492399   13355 system_pods.go:89] "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:21.492407   13355 system_pods.go:126] duration metric: took 7.377814ms to wait for k8s-apps to be running ...
	I0913 23:28:21.492417   13355 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:28:21.492462   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:28:21.506589   13355 system_svc.go:56] duration metric: took 14.16145ms WaitForService to wait for kubelet
	I0913 23:28:21.506620   13355 kubeadm.go:582] duration metric: took 19.080279709s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:28:21.506641   13355 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:28:21.509697   13355 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:28:21.509728   13355 node_conditions.go:123] node cpu capacity is 2
	I0913 23:28:21.509740   13355 node_conditions.go:105] duration metric: took 3.093718ms to run NodePressure ...
	I0913 23:28:21.509750   13355 start.go:241] waiting for startup goroutines ...
	I0913 23:28:21.549269   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.549838   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.630759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.964996   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:22.066659   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.066988   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.130457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.464269   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:22.550603   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.551392   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.631480   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.964384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:23.049834   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.050736   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.133507   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.464509   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:23.549382   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.552128   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.631843   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.965613   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:24.049624   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.050338   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.131212   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.464759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:24.549437   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.551097   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.630910   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.964175   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:25.048277   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.050045   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.131365   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.977617   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:25.978628   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.978709   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.979158   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.981429   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:26.049520   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.051681   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.130220   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.464159   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:26.549552   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.551222   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.631176   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.963871   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:27.050910   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.052011   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.132349   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.464810   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:27.549257   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.550786   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.630897   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.964079   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:28.050122   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.050142   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.151036   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.464673   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:28.549691   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.549874   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.630545   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.963838   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:29.049223   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.051589   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.131701   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.464227   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:29.549018   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.552460   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.631494   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.964688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:30.066437   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:30.066971   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.132136   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.464961   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:30.549367   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.550784   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:30.631748   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.964913   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:31.051008   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:31.051249   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.130779   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.464391   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:31.551575   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:31.552105   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.631630   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.965632   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:32.101759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:32.101841   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.131740   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.464572   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:32.549356   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.550906   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:32.633073   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.964216   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:33.048975   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.050916   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:33.131112   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.463822   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:33.549425   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.550516   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:33.630393   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.964336   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:34.048857   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.050443   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:34.151118   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.465096   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:34.549740   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.550620   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:34.631086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.966455   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:35.049659   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:35.050047   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.131495   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.465132   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:35.548766   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.550376   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:35.631577   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.964286   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:36.049062   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.050210   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:36.131543   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.464275   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:36.548452   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.550456   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:36.631360   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.963688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:37.049820   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.050743   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:37.130637   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.464113   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:37.549304   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.550688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:37.631192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.963973   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:38.051608   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.051727   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:38.133034   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.464549   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:38.559078   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:38.559213   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.631291   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.964483   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:39.050741   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:39.051159   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.131060   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.464822   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:39.549844   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:39.550291   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.630944   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.965248   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:40.048824   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.050349   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:40.131327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.464279   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:40.549628   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.550481   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:40.630731   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.964314   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:41.048937   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.050618   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:41.130605   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.464689   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:41.549726   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.550735   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:41.630990   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.964388   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:42.048950   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.050795   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:42.131078   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.464031   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:42.550212   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.551605   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:42.631901   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.965017   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:43.049775   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.050581   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:43.131657   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.464727   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:43.550289   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.550580   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:43.630961   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.965047   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:44.048962   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.050171   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:44.131175   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.463892   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:44.565475   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:44.565612   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.632466   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.964688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:45.049299   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:45.050431   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.134055   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.463841   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:45.550749   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.550792   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:45.631218   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.964803   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.049789   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.050384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:46.131201   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.465262   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.554496   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.555890   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:46.631739   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.963850   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.049818   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:47.051135   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.134195   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.465246   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.549517   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.550721   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:47.633663   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.964089   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.049632   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.050325   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:48.131567   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:48.466199   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.549697   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.550894   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:48.632690   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:48.964192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.049080   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.050467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:49.131986   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:49.464641   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.552164   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.554375   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:49.631764   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:49.965086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.049392   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.050669   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:50.131492   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:50.464328   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.549524   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.550434   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:50.631322   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:50.964441   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.049783   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.055312   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:51.131190   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:51.464922   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.550169   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:51.550221   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.631339   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:51.964457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.049661   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.051864   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:52.132038   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:52.582166   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.583770   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:52.584179   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.630661   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:52.964384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.049046   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.050467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:53.131202   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:53.464541   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.549549   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.551453   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:53.630606   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:53.964993   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.050779   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:54.051367   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.131038   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:54.464444   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.549153   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.551452   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:54.848826   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:54.964836   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.050095   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.050302   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:55.131159   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:55.464360   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.564936   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:55.565447   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.666242   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:55.964847   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.049829   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.051453   13355 kapi.go:107] duration metric: took 45.005028778s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 23:28:56.131651   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:56.464265   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.549020   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.630993   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:56.964711   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.049527   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.132133   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:57.464568   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.550287   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.631088   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:57.965832   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.066601   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:58.131348   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:58.464693   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.551166   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:58.632041   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:58.965180   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.066338   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:59.131515   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:59.463658   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.548973   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:59.630391   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:59.964296   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.049386   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:00.130469   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:00.463737   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.549776   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:00.717623   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:00.964483   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.049274   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:01.131153   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:01.463888   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.549890   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:01.631219   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.255077   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.255610   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:02.255728   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.474419   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.574193   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:02.630689   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.964630   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.049565   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:03.131380   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:03.464744   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.549449   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:03.630833   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:03.965101   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.048562   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:04.131484   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:04.466051   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.568692   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:04.668110   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:04.967488   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.049862   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:05.132252   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:05.464896   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.549994   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:05.630434   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:05.964526   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.065548   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:06.166487   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:06.464128   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.549947   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:06.631713   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:06.963955   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.049715   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:07.130974   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:07.464504   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.550454   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:07.630666   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:07.967197   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.068388   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:08.168815   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:08.464599   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.550992   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:08.630627   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:08.966766   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.053073   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:09.130730   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:09.465025   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.567230   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:09.630516   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:09.965721   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.054440   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:10.130768   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:10.464306   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.548749   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:10.631327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.276930   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:11.277860   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.279328   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.471697   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.582335   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:11.674829   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.965501   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.048830   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:12.130570   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:12.466419   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.553795   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:12.631061   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:12.964723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.051802   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:13.129998   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:13.465020   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.566946   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:13.632019   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:13.969250   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.050082   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:14.130824   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:14.464827   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.565739   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:14.629990   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:14.974680   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.049645   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:15.130802   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:15.464723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.567052   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:15.631421   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:15.964586   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.049406   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:16.130916   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:16.465274   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.548963   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:16.630852   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:16.964129   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.048736   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:17.131304   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:17.465372   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.549339   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:17.631400   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:17.964595   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.048825   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:18.130668   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:18.463994   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.550503   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:18.632529   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:18.978043   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.049954   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:19.131952   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:19.464512   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.551136   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:19.632160   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:19.964960   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.242123   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:20.242829   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:20.465827   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.550268   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:20.633322   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:20.964413   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.049949   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:21.132854   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:21.671555   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:21.673400   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.673957   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:21.963871   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.050196   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:22.130368   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:22.464308   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.549420   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:22.630664   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:22.963895   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.049709   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:23.150900   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:23.464457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.548815   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:23.631125   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:23.976832   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.078240   13355 kapi.go:107] duration metric: took 1m13.033450728s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 23:29:24.131740   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:24.464968   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.118892   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.121603   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.131661   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.464273   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.631894   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.964763   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.130778   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:26.465365   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.630404   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:26.963974   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.131493   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:27.464501   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.632858   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:27.963992   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.132535   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:28.464106   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.633421   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:28.969206   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.132088   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:29.466471   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.631809   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:29.966539   13355 kapi.go:107] duration metric: took 1m16.005977096s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 23:29:29.967938   13355 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-473197 cluster.
	I0913 23:29:29.969110   13355 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 23:29:29.970285   13355 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 23:29:30.131386   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:30.632192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:31.132279   13355 kapi.go:107] duration metric: took 1m18.506177888s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 23:29:31.134114   13355 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0913 23:29:31.135471   13355 addons.go:510] duration metric: took 1m28.709101641s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns default-storageclass inspektor-gadget metrics-server helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0913 23:29:31.135518   13355 start.go:246] waiting for cluster config update ...
	I0913 23:29:31.135543   13355 start.go:255] writing updated cluster config ...
	I0913 23:29:31.135825   13355 ssh_runner.go:195] Run: rm -f paused
	I0913 23:29:31.187868   13355 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:29:31.189865   13355 out.go:177] * Done! kubectl is now configured to use "addons-473197" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.959210346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3620112-1839-44da-96fb-c8352df0a33e name=/runtime.v1.RuntimeService/Version
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.960380249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e03575f-4ce7-4c12-92c8-25cb296170ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.961645176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270726961614843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571195,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e03575f-4ce7-4c12-92c8-25cb296170ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.962411116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2107096d-3d78-4aef-8593-37fa1b063f9a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.962486373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2107096d-3d78-4aef-8593-37fa1b063f9a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.962898316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759
a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699e3247ecb4cdb0a6f9ae7ddffd8900b63881d61ed92e915107012d7c0ea5d5,PodSandboxId:08bf80b9d21ebc8567d982196133d52ad2b2d9979b2872c3ad5444c526efa542,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726270681208407739,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes
.pod.uid: 6d5b439c-a1f4-473b-a91a-7cbab80aace0,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b94376dceb0c4059b32a1934926ea14a0e5fe78befbc1750960c67c340eaa1,PodSandboxId:cb47c279354a1938dc15dcc032938f8a59fe2f21bd859bbb9ae48eccd5042791,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726270678705491934,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf
b3767d-477f-4e5a-a747-acb9162d74fc,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes
.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e,PodSandboxId:6e80ba24feda5f6544885a36aa72f11d5bf8ed598548224041926a68a8c03259,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726270163014285052,Labels:map[string]string{io.kubernetes.container.name: controller,i
o.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-pvkkz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9176c8-fc0c-4357-b26c-f7d80c3527af,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:creat
e,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,
Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd
8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b,PodSandboxId:6f515b34fc6fa04741c4222a812786ed4719d255e79737e40d165dad265996d7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726270108692926746,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3db76d21-1e5d-4ece-8925-c84d0df606bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097
bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]
string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2107096d-3d78-4aef-8593-37fa1b063f9a name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.967437861Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06756005-8b23-48d1-8a4d-541e711ce000 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.967789628Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&PodSandboxMetadata{Name:nginx,Uid:4959ba5e-162e-43f3-987e-5dc829126b9d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270721015202276,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:38:40.703827382Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&PodSandboxMetadata{Name:headlamp-57fb76fcdb-z5dzh,Uid:8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270705250818346,Labels:map[string]stri
ng{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,pod-template-hash: 57fb76fcdb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:38:24.938857322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c491894c608c7ecf378756c0f1c93e33daeaeaaba4c6a8dca77f1e848935a1ed,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b7a4adaf-7929-4bb9-9ec5-b24ee1a8c88a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270171765323424,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7a4adaf-7929-4bb9-9ec5-b24ee1a8c88a,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:29:31.456201815Z,kubernetes.io/config.source: api,},RuntimeHandler
:,},&PodSandbox{Id:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-74znl,Uid:8797038b-501c-49c8-b165-7c1454b6cd59,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270158177502287,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:28:13.960057596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e80ba24feda5f6544885a36aa72f11d5bf8ed598548224041926a68a8c03259,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-bc57996ff-pvkkz,Uid:6f9176c8-fc0c-4357-b26c-f7d80c3527af,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270155113968714,Labels:map[string]string{app.kubernetes.io/co
mponent: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-pvkkz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9176c8-fc0c-4357-b26c-f7d80c3527af,pod-template-hash: bc57996ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:28:10.833147979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-2rwbq,Uid:157685d1-cf53-409b-8a21-e77779bcbbd6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270088886163117,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,k8s-app: metrics-server,pod-template
-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:28:08.569778945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-5c8rt,Uid:e1d5b7dd-422f-4d44-938e-f649701560ca,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270088348939593,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,pod-template-hash: 86d989889c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:28:07.993491701Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f515b34fc6fa04741c4222a812786ed4719d255e79737e40d165dad265996d7,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:3
db76d21-1e5d-4ece-8925-c84d0df606bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270088120708397,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3db76d21-1e5d-4ece-8925-c84d0df606bf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy
\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-09-13T23:28:07.391318705Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8268a064-fb82-447e-987d-931165d33b2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270088009232916,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.ku
bernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T23:28:07.636660787Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kx4xn,Uid:f7804727-02ec-474f-b927-f1c4b25ebc89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270083451266880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredn
s-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:28:02.543905124Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&PodSandboxMetadata{Name:kube-proxy-vg8p5,Uid:af4c8131-921e-411d-853d-135361aa197b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270082777796436,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:28:01.852776460Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16df8b6062c13f
3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-473197,Uid:28dfe4944cd2af53875d3a7e7fc03c39,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270070662520206,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28dfe4944cd2af53875d3a7e7fc03c39,kubernetes.io/config.seen: 2024-09-13T23:27:50.211421635Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-473197,Uid:12dfd4a2e36fe6cb94d70b96d2626ef4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270070659925705,Labels:map[string]string{component: kub
e-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.50:8443,kubernetes.io/config.hash: 12dfd4a2e36fe6cb94d70b96d2626ef4,kubernetes.io/config.seen: 2024-09-13T23:27:50.211420507Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-473197,Uid:cd8bbcdbb1c16b1ba3091d762550f625,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270070658307079,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,tier: control-plane,}
,Annotations:map[string]string{kubernetes.io/config.hash: cd8bbcdbb1c16b1ba3091d762550f625,kubernetes.io/config.seen: 2024-09-13T23:27:50.211422449Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&PodSandboxMetadata{Name:etcd-addons-473197,Uid:6bcd8b95a763196a2a35a097bd5eab7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726270070653802677,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.50:2379,kubernetes.io/config.hash: 6bcd8b95a763196a2a35a097bd5eab7e,kubernetes.io/config.seen: 2024-09-13T23:27:50.211417082Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=06
756005-8b23-48d1-8a4d-541e711ce000 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.968625081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49d9450c-2d96-4817-a838-eb0335da0adb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.968680867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49d9450c-2d96-4817-a838-eb0335da0adb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:46 addons-473197 crio[661]: time="2024-09-13 23:38:46.969034812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759
a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.po
d.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e,PodSandboxId:6e80ba24feda5f6544885a36aa72f11d5bf8ed598548224041926a68a8c03259,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726270163014285052,Labels:map[string]string{io.kubernetes.container.name: controller,io.k
ubernetes.pod.name: ingress-nginx-controller-bc57996ff-pvkkz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9176c8-fc0c-4357-b26c-f7d80c3527af,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec
{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:
458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636eac7cc
a31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b,PodSandboxId:6f515b34fc6fa04741c4222a812786ed4719d255e79737e40d165dad265996d7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726270108692926746,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3db76d21-1e5d-4ece-8925-c84d0df606bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"prot
ocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d
3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49d9450c-2d96-4817-a838-eb0335da0adb name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.006870809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=200c8aaf-dd3b-4214-8012-b9558bad170b name=/runtime.v1.RuntimeService/Version
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.006960218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=200c8aaf-dd3b-4214-8012-b9558bad170b name=/runtime.v1.RuntimeService/Version
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.008474572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6aa33b8-e503-4109-bb50-b7ec77a81cd9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.009956067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270727009925851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571195,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6aa33b8-e503-4109-bb50-b7ec77a81cd9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.010484946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cb12df0-6bed-4f2a-a2ad-c8b1751ac175 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.010554977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cb12df0-6bed-4f2a-a2ad-c8b1751ac175 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.011040527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759
a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699e3247ecb4cdb0a6f9ae7ddffd8900b63881d61ed92e915107012d7c0ea5d5,PodSandboxId:08bf80b9d21ebc8567d982196133d52ad2b2d9979b2872c3ad5444c526efa542,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726270681208407739,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes
.pod.uid: 6d5b439c-a1f4-473b-a91a-7cbab80aace0,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b94376dceb0c4059b32a1934926ea14a0e5fe78befbc1750960c67c340eaa1,PodSandboxId:cb47c279354a1938dc15dcc032938f8a59fe2f21bd859bbb9ae48eccd5042791,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726270678705491934,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf
b3767d-477f-4e5a-a747-acb9162d74fc,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes
.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e,PodSandboxId:6e80ba24feda5f6544885a36aa72f11d5bf8ed598548224041926a68a8c03259,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726270163014285052,Labels:map[string]string{io.kubernetes.container.name: controller,i
o.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-pvkkz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9176c8-fc0c-4357-b26c-f7d80c3527af,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:creat
e,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,
Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd
8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b,PodSandboxId:6f515b34fc6fa04741c4222a812786ed4719d255e79737e40d165dad265996d7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726270108692926746,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3db76d21-1e5d-4ece-8925-c84d0df606bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097
bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]
string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cb12df0-6bed-4f2a-a2ad-c8b1751ac175 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.051246925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76417ea8-6495-4d1b-9c0b-8873291f4524 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.051335174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76417ea8-6495-4d1b-9c0b-8873291f4524 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.052653016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85899a3d-7a52-4571-962d-cb87df4c234e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.053820058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270727053786773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571195,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85899a3d-7a52-4571-962d-cb87df4c234e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.054515960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaeef03a-ad7f-42e6-a2b8-1d218ce005f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.054583788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaeef03a-ad7f-42e6-a2b8-1d218ce005f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:38:47 addons-473197 crio[661]: time="2024-09-13 23:38:47.054988674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759
a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699e3247ecb4cdb0a6f9ae7ddffd8900b63881d61ed92e915107012d7c0ea5d5,PodSandboxId:08bf80b9d21ebc8567d982196133d52ad2b2d9979b2872c3ad5444c526efa542,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726270681208407739,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes
.pod.uid: 6d5b439c-a1f4-473b-a91a-7cbab80aace0,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b94376dceb0c4059b32a1934926ea14a0e5fe78befbc1750960c67c340eaa1,PodSandboxId:cb47c279354a1938dc15dcc032938f8a59fe2f21bd859bbb9ae48eccd5042791,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726270678705491934,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf
b3767d-477f-4e5a-a747-acb9162d74fc,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes
.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e,PodSandboxId:6e80ba24feda5f6544885a36aa72f11d5bf8ed598548224041926a68a8c03259,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726270163014285052,Labels:map[string]string{io.kubernetes.container.name: controller,i
o.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-pvkkz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9176c8-fc0c-4357-b26c-f7d80c3527af,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:creat
e,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,
Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd
8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b,PodSandboxId:6f515b34fc6fa04741c4222a812786ed4719d255e79737e40d165dad265996d7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726270108692926746,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3db76d21-1e5d-4ece-8925-c84d0df606bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097
bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]
string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:m
ap[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaeef03a-ad7f-42e6-a2b8-1d218ce005f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97beb09dce981       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 seconds ago       Running             nginx                     0                   e4292583c4fab       nginx
	038624c91b1cd       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        14 seconds ago      Running             headlamp                  0                   2a8766ca0210c       headlamp-57fb76fcdb-z5dzh
	699e3247ecb4c       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             45 seconds ago      Exited              helper-pod                0                   08bf80b9d21eb       helper-pod-delete-pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099
	75b94376dceb0       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                            48 seconds ago      Exited              busybox                   0                   cb47c279354a1       test-local-path
	5196a5dc9c17b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   d580ec2a88560       gcp-auth-89d5ffd79-74znl
	5393c81e3d84a       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   6e80ba24feda5       ingress-nginx-controller-bc57996ff-pvkkz
	fedfdc0baddcb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   52c57a20ae261       ingress-nginx-admission-patch-5bhhr
	603c43ae8b4f5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   248b6de52c586       ingress-nginx-admission-create-nw7k5
	04e992df68051       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        9 minutes ago       Running             metrics-server            0                   dc9bf0b998e05       metrics-server-84c5f94fbc-2rwbq
	bd8804d28cfdd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago       Running             local-path-provisioner    0                   458dcb49d1f7b       local-path-provisioner-86d989889c-5c8rt
	636eac7cca31b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   6f515b34fc6fa       kube-ingress-dns-minikube
	c9b12f34bf4ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   b5e0a2e4aa643       storage-provisioner
	d89a21338611a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   a8e55428ab347       coredns-7c65d6cfc9-kx4xn
	83331cb3777f3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                0                   f7778cd3a139f       kube-proxy-vg8p5
	04477f2de3ed2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                      0                   c0df35fa7a533       etcd-addons-473197
	56e77d112c7cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago      Running             kube-apiserver            0                   6a9771749e8e5       kube-apiserver-addons-473197
	6d8bc098317b8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago      Running             kube-scheduler            0                   555adaf092a3a       kube-scheduler-addons-473197
	5654029eb497f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago      Running             kube-controller-manager   0                   16df8b6062c13       kube-controller-manager-addons-473197
	
	
	==> coredns [d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7] <==
	[INFO] 127.0.0.1:45670 - 7126 "HINFO IN 5243104806893607912.7915310536040454133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013008283s
	[INFO] 10.244.0.7:35063 - 39937 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000380883s
	[INFO] 10.244.0.7:35063 - 43782 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014847s
	[INFO] 10.244.0.7:57829 - 35566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163865s
	[INFO] 10.244.0.7:57829 - 30448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000958s
	[INFO] 10.244.0.7:39015 - 39866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132201s
	[INFO] 10.244.0.7:39015 - 60863 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107562s
	[INFO] 10.244.0.7:58981 - 30723 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000162373s
	[INFO] 10.244.0.7:58981 - 46338 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00022074s
	[INFO] 10.244.0.7:42427 - 30557 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119811s
	[INFO] 10.244.0.7:42427 - 64858 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194198s
	[INFO] 10.244.0.7:47702 - 27656 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006553s
	[INFO] 10.244.0.7:47702 - 4878 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042687s
	[INFO] 10.244.0.7:44162 - 12670 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051358s
	[INFO] 10.244.0.7:44162 - 55416 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106292s
	[INFO] 10.244.0.7:42573 - 35758 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040379s
	[INFO] 10.244.0.7:42573 - 45232 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000289788s
	[INFO] 10.244.0.22:35446 - 19101 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000568711s
	[INFO] 10.244.0.22:46347 - 39209 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000700369s
	[INFO] 10.244.0.22:55127 - 33729 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167148s
	[INFO] 10.244.0.22:59606 - 29197 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000295747s
	[INFO] 10.244.0.22:59298 - 45525 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000336329s
	[INFO] 10.244.0.22:46438 - 8493 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150611s
	[INFO] 10.244.0.22:45134 - 55606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000995828s
	[INFO] 10.244.0.22:56372 - 20336 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001287124s
	
	
	==> describe nodes <==
	Name:               addons-473197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-473197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-473197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_27_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-473197
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-473197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:38:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:38:08 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:38:08 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:38:08 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:38:08 +0000   Fri, 13 Sep 2024 23:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-473197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a5e8d89e8ad43a6a8c642064226a573
	  System UUID:                2a5e8d89-e8ad-43a6-a8c6-42064226a573
	  Boot ID:                    f73ad719-e78b-4b75-b596-4b22311bf8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  gcp-auth                    gcp-auth-89d5ffd79-74znl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-57fb76fcdb-z5dzh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-pvkkz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-kx4xn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-473197                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-473197                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-473197       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vg8p5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-473197                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-2rwbq             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-5c8rt     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-473197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-473197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-473197 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-473197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-473197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-473197 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-473197 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-473197 event: Registered Node addons-473197 in Controller
	  Normal  CIDRAssignmentFailed     10m                cidrAllocator    Node addons-473197 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +7.959701] kauditd_printk_skb: 63 callbacks suppressed
	[ +13.171374] kauditd_printk_skb: 27 callbacks suppressed
	[ +11.767727] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.860640] kauditd_printk_skb: 4 callbacks suppressed
	[Sep13 23:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.350538] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.114970] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.355485] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.753980] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.472896] kauditd_printk_skb: 14 callbacks suppressed
	[ +24.455652] kauditd_printk_skb: 32 callbacks suppressed
	[Sep13 23:30] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:37] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.847903] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.069379] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.043967] kauditd_printk_skb: 10 callbacks suppressed
	[Sep13 23:38] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.853283] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.843077] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.344633] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.164878] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.255016] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.389323] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4] <==
	{"level":"info","ts":"2024-09-13T23:29:25.102745Z","caller":"traceutil/trace.go:171","msg":"trace[1515683131] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"486.960774ms","start":"2024-09-13T23:29:24.615774Z","end":"2024-09-13T23:29:25.102735Z","steps":["trace[1515683131] 'agreement among raft nodes before linearized reading'  (duration: 485.136164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:29:25.102795Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:29:24.615744Z","time spent":"487.034964ms","remote":"127.0.0.1:52980","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-13T23:29:25.100945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.060995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:29:25.102942Z","caller":"traceutil/trace.go:171","msg":"trace[320340025] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"153.055279ms","start":"2024-09-13T23:29:24.949881Z","end":"2024-09-13T23:29:25.102936Z","steps":["trace[320340025] 'agreement among raft nodes before linearized reading'  (duration: 151.055187ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:30:00.891022Z","caller":"traceutil/trace.go:171","msg":"trace[1266275664] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"219.390831ms","start":"2024-09-13T23:30:00.671602Z","end":"2024-09-13T23:30:00.890993Z","steps":["trace[1266275664] 'process raft request'  (duration: 219.010151ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:37:51.889897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1557}
	{"level":"info","ts":"2024-09-13T23:37:51.938422Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1557,"took":"47.925054ms","hash":4240063649,"current-db-size-bytes":6725632,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3678208,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-09-13T23:37:51.938492Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4240063649,"revision":1557,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T23:37:56.077320Z","caller":"traceutil/trace.go:171","msg":"trace[2140372478] linearizableReadLoop","detail":"{readStateIndex:2209; appliedIndex:2208; }","duration":"247.359784ms","start":"2024-09-13T23:37:55.829919Z","end":"2024-09-13T23:37:56.077279Z","steps":["trace[2140372478] 'read index received'  (duration: 247.248443ms)","trace[2140372478] 'applied index is now lower than readState.Index'  (duration: 110.59µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:37:56.077451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.493847ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:37:56.077484Z","caller":"traceutil/trace.go:171","msg":"trace[388607265] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2064; }","duration":"247.558448ms","start":"2024-09-13T23:37:55.829913Z","end":"2024-09-13T23:37:56.077472Z","steps":["trace[388607265] 'agreement among raft nodes before linearized reading'  (duration: 247.477707ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:37:56.077636Z","caller":"traceutil/trace.go:171","msg":"trace[772616711] transaction","detail":"{read_only:false; response_revision:2064; number_of_response:1; }","duration":"342.562437ms","start":"2024-09-13T23:37:55.735053Z","end":"2024-09-13T23:37:56.077616Z","steps":["trace[772616711] 'process raft request'  (duration: 342.117628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:37:56.077806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:37:55.735020Z","time spent":"342.655019ms","remote":"127.0.0.1:53072","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-13T23:38:26.271493Z","caller":"traceutil/trace.go:171","msg":"trace[2131628066] linearizableReadLoop","detail":"{readStateIndex:2494; appliedIndex:2493; }","duration":"108.567306ms","start":"2024-09-13T23:38:26.162913Z","end":"2024-09-13T23:38:26.271481Z","steps":["trace[2131628066] 'read index received'  (duration: 108.433015ms)","trace[2131628066] 'applied index is now lower than readState.Index'  (duration: 133.742µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:38:26.271587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.679598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:26.271607Z","caller":"traceutil/trace.go:171","msg":"trace[907710598] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-attacher; range_end:; response_count:0; response_revision:2337; }","duration":"108.715806ms","start":"2024-09-13T23:38:26.162886Z","end":"2024-09-13T23:38:26.271602Z","steps":["trace[907710598] 'agreement among raft nodes before linearized reading'  (duration: 108.663744ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:38:26.271800Z","caller":"traceutil/trace.go:171","msg":"trace[1022302903] transaction","detail":"{read_only:false; response_revision:2337; number_of_response:1; }","duration":"163.9076ms","start":"2024-09-13T23:38:26.107885Z","end":"2024-09-13T23:38:26.271793Z","steps":["trace[1022302903] 'process raft request'  (duration: 163.492838ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:38:33.012093Z","caller":"traceutil/trace.go:171","msg":"trace[1999613905] linearizableReadLoop","detail":"{readStateIndex:2536; appliedIndex:2535; }","duration":"332.084954ms","start":"2024-09-13T23:38:32.679984Z","end":"2024-09-13T23:38:33.012069Z","steps":["trace[1999613905] 'read index received'  (duration: 331.823648ms)","trace[1999613905] 'applied index is now lower than readState.Index'  (duration: 260.868µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T23:38:33.012294Z","caller":"traceutil/trace.go:171","msg":"trace[1261167858] transaction","detail":"{read_only:false; response_revision:2376; number_of_response:1; }","duration":"410.968582ms","start":"2024-09-13T23:38:32.601315Z","end":"2024-09-13T23:38:33.012284Z","steps":["trace[1261167858] 'process raft request'  (duration: 410.572548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.21368ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:33.012477Z","caller":"traceutil/trace.go:171","msg":"trace[420653707] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2376; }","duration":"182.278804ms","start":"2024-09-13T23:38:32.830178Z","end":"2024-09-13T23:38:33.012457Z","steps":["trace[420653707] 'agreement among raft nodes before linearized reading'  (duration: 182.193813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:38:32.601291Z","time spent":"411.032114ms","remote":"127.0.0.1:52964","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2374 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-13T23:38:33.012625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.637201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:33.012648Z","caller":"traceutil/trace.go:171","msg":"trace[1394512193] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2376; }","duration":"332.657728ms","start":"2024-09-13T23:38:32.679980Z","end":"2024-09-13T23:38:33.012638Z","steps":["trace[1394512193] 'agreement among raft nodes before linearized reading'  (duration: 332.619348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:38:32.679948Z","time spent":"332.7162ms","remote":"127.0.0.1:52786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> gcp-auth [5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299] <==
	2024/09/13 23:29:31 Ready to write response ...
	2024/09/13 23:29:31 Ready to marshal response ...
	2024/09/13 23:29:31 Ready to write response ...
	2024/09/13 23:37:39 Ready to marshal response ...
	2024/09/13 23:37:39 Ready to write response ...
	2024/09/13 23:37:45 Ready to marshal response ...
	2024/09/13 23:37:45 Ready to write response ...
	2024/09/13 23:37:46 Ready to marshal response ...
	2024/09/13 23:37:46 Ready to write response ...
	2024/09/13 23:37:46 Ready to marshal response ...
	2024/09/13 23:37:46 Ready to write response ...
	2024/09/13 23:37:47 Ready to marshal response ...
	2024/09/13 23:37:47 Ready to write response ...
	2024/09/13 23:38:00 Ready to marshal response ...
	2024/09/13 23:38:00 Ready to write response ...
	2024/09/13 23:38:11 Ready to marshal response ...
	2024/09/13 23:38:11 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:40 Ready to marshal response ...
	2024/09/13 23:38:40 Ready to write response ...
	
	
	==> kernel <==
	 23:38:47 up 11 min,  0 users,  load average: 0.59, 0.63, 0.56
	Linux addons-473197 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0913 23:29:58.924676       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.102.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.102.69:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.102.69:443: connect: connection refused" logger="UnhandledError"
	E0913 23:29:58.955657       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0913 23:29:58.960975       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0913 23:38:03.472937       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0913 23:38:24.878230       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.178.54"}
	I0913 23:38:28.146928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.146968       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.188882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.188920       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.207934       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.207989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.290379       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.290409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.311424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.311452       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 23:38:29.290607       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0913 23:38:29.311717       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 23:38:29.343244       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0913 23:38:35.108228       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 23:38:36.237049       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 23:38:40.565186       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 23:38:40.744026       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.251.250"}
	
	
	==> kube-controller-manager [5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd] <==
	I0913 23:38:32.185095       1 shared_informer.go:320] Caches are synced for garbage collector
	W0913 23:38:32.540527       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:32.540569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:33.244938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:33.245017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:38:33.383679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="78.831µs"
	I0913 23:38:33.428295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="16.797301ms"
	I0913 23:38:33.428613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="91.959µs"
	W0913 23:38:35.957333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:35.957397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0913 23:38:36.238894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:36.762255       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:36.762306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:37.083608       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:37.083661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:38.071377       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:38.071423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:40.225192       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:40.225253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:44.186778       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:44.186816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:38:45.295803       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0913 23:38:45.906946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.473µs"
	W0913 23:38:47.464555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:47.464603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:28:04.380224       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:28:04.489950       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.50"]
	E0913 23:28:04.490030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:28:04.594464       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:28:04.594495       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:28:04.594519       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:28:04.603873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:28:04.604221       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:28:04.604252       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:28:04.605991       1 config.go:199] "Starting service config controller"
	I0913 23:28:04.606001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:28:04.606031       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:28:04.606036       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:28:04.618190       1 config.go:328] "Starting node config controller"
	I0913 23:28:04.618220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:28:04.706337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:28:04.706402       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:28:04.718993       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f] <==
	W0913 23:27:54.609234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 23:27:54.609344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.615180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 23:27:54.615314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.634487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:54.634695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.650017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 23:27:54.650225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.663547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 23:27:54.663702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.739538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 23:27:54.739633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.802428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 23:27:54.802534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.802606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 23:27:54.802645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.915039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 23:27:54.915259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.056348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.056469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.122788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.122892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.209039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.209209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 23:27:57.297586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:38:45 addons-473197 kubelet[1197]: I0913 23:38:45.583856    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b14417f-5a44-4a3b-a4a7-0c62731c2ead-gcp-creds\") pod \"7b14417f-5a44-4a3b-a4a7-0c62731c2ead\" (UID: \"7b14417f-5a44-4a3b-a4a7-0c62731c2ead\") "
	Sep 13 23:38:45 addons-473197 kubelet[1197]: I0913 23:38:45.584447    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b14417f-5a44-4a3b-a4a7-0c62731c2ead-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7b14417f-5a44-4a3b-a4a7-0c62731c2ead" (UID: "7b14417f-5a44-4a3b-a4a7-0c62731c2ead"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 23:38:45 addons-473197 kubelet[1197]: I0913 23:38:45.593175    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b14417f-5a44-4a3b-a4a7-0c62731c2ead-kube-api-access-wpq7k" (OuterVolumeSpecName: "kube-api-access-wpq7k") pod "7b14417f-5a44-4a3b-a4a7-0c62731c2ead" (UID: "7b14417f-5a44-4a3b-a4a7-0c62731c2ead"). InnerVolumeSpecName "kube-api-access-wpq7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:38:45 addons-473197 kubelet[1197]: I0913 23:38:45.684690    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wpq7k\" (UniqueName: \"kubernetes.io/projected/7b14417f-5a44-4a3b-a4a7-0c62731c2ead-kube-api-access-wpq7k\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:38:45 addons-473197 kubelet[1197]: I0913 23:38:45.684734    1197 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b14417f-5a44-4a3b-a4a7-0c62731c2ead-gcp-creds\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:38:45 addons-473197 kubelet[1197]: I0913 23:38:45.806801    1197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=2.258244142 podStartE2EDuration="5.806778594s" podCreationTimestamp="2024-09-13 23:38:40 +0000 UTC" firstStartedPulling="2024-09-13 23:38:41.227664586 +0000 UTC m=+644.388635687" lastFinishedPulling="2024-09-13 23:38:44.776199039 +0000 UTC m=+647.937170139" observedRunningTime="2024-09-13 23:38:45.491519141 +0000 UTC m=+648.652490263" watchObservedRunningTime="2024-09-13 23:38:45.806778594 +0000 UTC m=+648.967749695"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.292485    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6znc\" (UniqueName: \"kubernetes.io/projected/7b0c1721-acbc-44f4-81ce-3918399c4448-kube-api-access-t6znc\") pod \"7b0c1721-acbc-44f4-81ce-3918399c4448\" (UID: \"7b0c1721-acbc-44f4-81ce-3918399c4448\") "
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.304555    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0c1721-acbc-44f4-81ce-3918399c4448-kube-api-access-t6znc" (OuterVolumeSpecName: "kube-api-access-t6znc") pod "7b0c1721-acbc-44f4-81ce-3918399c4448" (UID: "7b0c1721-acbc-44f4-81ce-3918399c4448"). InnerVolumeSpecName "kube-api-access-t6znc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.393864    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldghh\" (UniqueName: \"kubernetes.io/projected/8031cc7e-4d9b-4151-bca2-ec5eda26c3c3-kube-api-access-ldghh\") pod \"8031cc7e-4d9b-4151-bca2-ec5eda26c3c3\" (UID: \"8031cc7e-4d9b-4151-bca2-ec5eda26c3c3\") "
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.393978    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t6znc\" (UniqueName: \"kubernetes.io/projected/7b0c1721-acbc-44f4-81ce-3918399c4448-kube-api-access-t6znc\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.396992    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8031cc7e-4d9b-4151-bca2-ec5eda26c3c3-kube-api-access-ldghh" (OuterVolumeSpecName: "kube-api-access-ldghh") pod "8031cc7e-4d9b-4151-bca2-ec5eda26c3c3" (UID: "8031cc7e-4d9b-4151-bca2-ec5eda26c3c3"). InnerVolumeSpecName "kube-api-access-ldghh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.489308    1197 scope.go:117] "RemoveContainer" containerID="814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.495371    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ldghh\" (UniqueName: \"kubernetes.io/projected/8031cc7e-4d9b-4151-bca2-ec5eda26c3c3-kube-api-access-ldghh\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.541871    1197 scope.go:117] "RemoveContainer" containerID="814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: E0913 23:38:46.542633    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67\": container with ID starting with 814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67 not found: ID does not exist" containerID="814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.542663    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67"} err="failed to get container status \"814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67\": rpc error: code = NotFound desc = could not find container \"814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67\": container with ID starting with 814cb608e7ef49bcc0009e506d76cc6a17f56d09399888eddf478632cb63cb67 not found: ID does not exist"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.542686    1197 scope.go:117] "RemoveContainer" containerID="1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.565407    1197 scope.go:117] "RemoveContainer" containerID="1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: E0913 23:38:46.565898    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633\": container with ID starting with 1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633 not found: ID does not exist" containerID="1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.565930    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633"} err="failed to get container status \"1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633\": rpc error: code = NotFound desc = could not find container \"1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633\": container with ID starting with 1c10000d1436e2f366f00c783f73d382074672de235a2f819de40de99d0e0633 not found: ID does not exist"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.969967    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0c1721-acbc-44f4-81ce-3918399c4448" path="/var/lib/kubelet/pods/7b0c1721-acbc-44f4-81ce-3918399c4448/volumes"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.970801    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b14417f-5a44-4a3b-a4a7-0c62731c2ead" path="/var/lib/kubelet/pods/7b14417f-5a44-4a3b-a4a7-0c62731c2ead/volumes"
	Sep 13 23:38:46 addons-473197 kubelet[1197]: I0913 23:38:46.971486    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8031cc7e-4d9b-4151-bca2-ec5eda26c3c3" path="/var/lib/kubelet/pods/8031cc7e-4d9b-4151-bca2-ec5eda26c3c3/volumes"
	Sep 13 23:38:47 addons-473197 kubelet[1197]: E0913 23:38:47.546261    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270727545282871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571195,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:38:47 addons-473197 kubelet[1197]: E0913 23:38:47.546295    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270727545282871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571195,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d] <==
	I0913 23:28:10.804057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:28:11.078500       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:28:11.078567       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:28:11.120016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:28:11.124355       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c048df4-0a4e-4b96-9f0e-8fcf6762cf64", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8 became leader
	I0913 23:28:11.124757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8!
	I0913 23:28:11.226238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-473197 -n addons-473197
helpers_test.go:261: (dbg) Run:  kubectl --context addons-473197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-nw7k5 ingress-nginx-admission-patch-5bhhr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-473197 describe pod busybox ingress-nginx-admission-create-nw7k5 ingress-nginx-admission-patch-5bhhr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-473197 describe pod busybox ingress-nginx-admission-create-nw7k5 ingress-nginx-admission-patch-5bhhr: exit status 1 (69.035537ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-473197/192.168.39.50
	Start Time:       Fri, 13 Sep 2024 23:29:31 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nj4pg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nj4pg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-473197
	  Normal   Pulling    7m44s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nw7k5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5bhhr" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-473197 describe pod busybox ingress-nginx-admission-create-nw7k5 ingress-nginx-admission-patch-5bhhr: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.15s)

x
+
TestAddons/parallel/Ingress (155.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-473197 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-473197 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-473197 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4959ba5e-162e-43f3-987e-5dc829126b9d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4959ba5e-162e-43f3-987e-5dc829126b9d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004478098s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-473197 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.380901359s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-473197 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.50
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 addons disable ingress-dns --alsologtostderr -v=1: (1.51450515s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 addons disable ingress --alsologtostderr -v=1: (7.721972078s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-473197 -n addons-473197
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 logs -n 25: (1.314892024s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-763760                                                                     | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-551384                                                                     | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-763760                                                                     | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-510431 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | binary-mirror-510431                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40845                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-510431                                                                     | binary-mirror-510431 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-473197 --wait=true                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:37 UTC | 13 Sep 24 23:37 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-473197 ssh cat                                                                       | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | /opt/local-path-provisioner/pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | -p addons-473197                                                                            |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | -p addons-473197                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-473197 ip                                                                            | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-473197 ssh curl -s                                                                   | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-473197 ip                                                                            | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:41 UTC | 13 Sep 24 23:41 UTC |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:41 UTC | 13 Sep 24 23:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:41 UTC | 13 Sep 24 23:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:27:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:27:19.727478   13355 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:27:19.727577   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:19.727584   13355 out.go:358] Setting ErrFile to fd 2...
	I0913 23:27:19.727589   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:19.727825   13355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:27:19.728488   13355 out.go:352] Setting JSON to false
	I0913 23:27:19.729317   13355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":586,"bootTime":1726269454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:27:19.729406   13355 start.go:139] virtualization: kvm guest
	I0913 23:27:19.731822   13355 out.go:177] * [addons-473197] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:27:19.733210   13355 notify.go:220] Checking for updates...
	I0913 23:27:19.733237   13355 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:27:19.734712   13355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:27:19.735976   13355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:27:19.737182   13355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:19.738438   13355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:27:19.739925   13355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:27:19.741131   13355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:27:19.775615   13355 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 23:27:19.777213   13355 start.go:297] selected driver: kvm2
	I0913 23:27:19.777235   13355 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:27:19.777247   13355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:27:19.777996   13355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:19.778088   13355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:27:19.793811   13355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:27:19.793861   13355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:27:19.794087   13355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:27:19.794117   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:19.794161   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:19.794171   13355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:27:19.794217   13355 start.go:340] cluster config:
	{Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:19.794313   13355 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:19.796337   13355 out.go:177] * Starting "addons-473197" primary control-plane node in "addons-473197" cluster
	I0913 23:27:19.797380   13355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:19.797422   13355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:27:19.797444   13355 cache.go:56] Caching tarball of preloaded images
	I0913 23:27:19.797531   13355 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:27:19.797549   13355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:27:19.797846   13355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json ...
	I0913 23:27:19.797865   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json: {Name:mkc3a28348c95a05c47c4230656de6866b98328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:19.798004   13355 start.go:360] acquireMachinesLock for addons-473197: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:27:19.798046   13355 start.go:364] duration metric: took 28.71µs to acquireMachinesLock for "addons-473197"
	I0913 23:27:19.798062   13355 start.go:93] Provisioning new machine with config: &{Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:27:19.798113   13355 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 23:27:19.799714   13355 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 23:27:19.799890   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:27:19.799928   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:27:19.814905   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0913 23:27:19.815364   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:27:19.815966   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:27:19.815989   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:27:19.816395   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:27:19.816630   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:19.816779   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:19.816997   13355 start.go:159] libmachine.API.Create for "addons-473197" (driver="kvm2")
	I0913 23:27:19.817032   13355 client.go:168] LocalClient.Create starting
	I0913 23:27:19.817080   13355 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:27:19.909228   13355 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:27:19.970689   13355 main.go:141] libmachine: Running pre-create checks...
	I0913 23:27:19.970714   13355 main.go:141] libmachine: (addons-473197) Calling .PreCreateCheck
	I0913 23:27:19.971194   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:19.971662   13355 main.go:141] libmachine: Creating machine...
	I0913 23:27:19.971677   13355 main.go:141] libmachine: (addons-473197) Calling .Create
	I0913 23:27:19.971844   13355 main.go:141] libmachine: (addons-473197) Creating KVM machine...
	I0913 23:27:19.973234   13355 main.go:141] libmachine: (addons-473197) DBG | found existing default KVM network
	I0913 23:27:19.974016   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:19.973849   13377 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0913 23:27:19.974095   13355 main.go:141] libmachine: (addons-473197) DBG | created network xml: 
	I0913 23:27:19.974122   13355 main.go:141] libmachine: (addons-473197) DBG | <network>
	I0913 23:27:19.974136   13355 main.go:141] libmachine: (addons-473197) DBG |   <name>mk-addons-473197</name>
	I0913 23:27:19.974149   13355 main.go:141] libmachine: (addons-473197) DBG |   <dns enable='no'/>
	I0913 23:27:19.974157   13355 main.go:141] libmachine: (addons-473197) DBG |   
	I0913 23:27:19.974171   13355 main.go:141] libmachine: (addons-473197) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 23:27:19.974179   13355 main.go:141] libmachine: (addons-473197) DBG |     <dhcp>
	I0913 23:27:19.974184   13355 main.go:141] libmachine: (addons-473197) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 23:27:19.974189   13355 main.go:141] libmachine: (addons-473197) DBG |     </dhcp>
	I0913 23:27:19.974194   13355 main.go:141] libmachine: (addons-473197) DBG |   </ip>
	I0913 23:27:19.974216   13355 main.go:141] libmachine: (addons-473197) DBG |   
	I0913 23:27:19.974226   13355 main.go:141] libmachine: (addons-473197) DBG | </network>
	I0913 23:27:19.974233   13355 main.go:141] libmachine: (addons-473197) DBG | 
	I0913 23:27:19.980176   13355 main.go:141] libmachine: (addons-473197) DBG | trying to create private KVM network mk-addons-473197 192.168.39.0/24...
	I0913 23:27:20.045910   13355 main.go:141] libmachine: (addons-473197) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 ...
	I0913 23:27:20.045940   13355 main.go:141] libmachine: (addons-473197) DBG | private KVM network mk-addons-473197 192.168.39.0/24 created
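	The XML printed above is the complete network definition the kvm2 driver hands to libvirt: an isolated 192.168.39.0/24 subnet with its own DHCP range and DNS disabled. As a rough illustration only (the driver talks to libvirt through its API, not through virsh), the same network could be created by hand along the lines of the sketch below; the temp-file handling and the virsh invocations are assumptions, not minikube code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML mirrors the definition logged above: an isolated network with
// DHCP on 192.168.39.2-253 and DNS disabled.
const networkXML = `<network>
  <name>mk-addons-473197</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the XML to a temporary file so virsh can read it.
	f, err := os.CreateTemp("", "mk-addons-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define and start the network against the same URI the log shows
	// (KVMQemuURI:qemu:///system).
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-addons-473197"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("virsh %s:\n%s", args[0], out)
		if err != nil {
			panic(err)
		}
	}
}
```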
	I0913 23:27:20.045954   13355 main.go:141] libmachine: (addons-473197) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:27:20.046047   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.045834   13377 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:20.046087   13355 main.go:141] libmachine: (addons-473197) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:27:20.298677   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.298568   13377 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa...
	I0913 23:27:20.458808   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.458662   13377 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/addons-473197.rawdisk...
	I0913 23:27:20.458837   13355 main.go:141] libmachine: (addons-473197) DBG | Writing magic tar header
	I0913 23:27:20.458849   13355 main.go:141] libmachine: (addons-473197) DBG | Writing SSH key tar header
	I0913 23:27:20.458859   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.458774   13377 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 ...
	I0913 23:27:20.458873   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197
	I0913 23:27:20.458907   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 (perms=drwx------)
	I0913 23:27:20.458937   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:27:20.458947   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:27:20.458964   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:20.458975   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:27:20.458985   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:27:20.459015   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:27:20.459028   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:27:20.459044   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:27:20.459058   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:27:20.459067   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home
	I0913 23:27:20.459081   13355 main.go:141] libmachine: (addons-473197) DBG | Skipping /home - not owner
	I0913 23:27:20.459096   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:27:20.459111   13355 main.go:141] libmachine: (addons-473197) Creating domain...
	I0913 23:27:20.459993   13355 main.go:141] libmachine: (addons-473197) define libvirt domain using xml: 
	I0913 23:27:20.460017   13355 main.go:141] libmachine: (addons-473197) <domain type='kvm'>
	I0913 23:27:20.460026   13355 main.go:141] libmachine: (addons-473197)   <name>addons-473197</name>
	I0913 23:27:20.460037   13355 main.go:141] libmachine: (addons-473197)   <memory unit='MiB'>4000</memory>
	I0913 23:27:20.460042   13355 main.go:141] libmachine: (addons-473197)   <vcpu>2</vcpu>
	I0913 23:27:20.460054   13355 main.go:141] libmachine: (addons-473197)   <features>
	I0913 23:27:20.460079   13355 main.go:141] libmachine: (addons-473197)     <acpi/>
	I0913 23:27:20.460098   13355 main.go:141] libmachine: (addons-473197)     <apic/>
	I0913 23:27:20.460109   13355 main.go:141] libmachine: (addons-473197)     <pae/>
	I0913 23:27:20.460119   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460142   13355 main.go:141] libmachine: (addons-473197)   </features>
	I0913 23:27:20.460165   13355 main.go:141] libmachine: (addons-473197)   <cpu mode='host-passthrough'>
	I0913 23:27:20.460178   13355 main.go:141] libmachine: (addons-473197)   
	I0913 23:27:20.460200   13355 main.go:141] libmachine: (addons-473197)   </cpu>
	I0913 23:27:20.460208   13355 main.go:141] libmachine: (addons-473197)   <os>
	I0913 23:27:20.460213   13355 main.go:141] libmachine: (addons-473197)     <type>hvm</type>
	I0913 23:27:20.460220   13355 main.go:141] libmachine: (addons-473197)     <boot dev='cdrom'/>
	I0913 23:27:20.460226   13355 main.go:141] libmachine: (addons-473197)     <boot dev='hd'/>
	I0913 23:27:20.460238   13355 main.go:141] libmachine: (addons-473197)     <bootmenu enable='no'/>
	I0913 23:27:20.460250   13355 main.go:141] libmachine: (addons-473197)   </os>
	I0913 23:27:20.460265   13355 main.go:141] libmachine: (addons-473197)   <devices>
	I0913 23:27:20.460282   13355 main.go:141] libmachine: (addons-473197)     <disk type='file' device='cdrom'>
	I0913 23:27:20.460301   13355 main.go:141] libmachine: (addons-473197)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/boot2docker.iso'/>
	I0913 23:27:20.460326   13355 main.go:141] libmachine: (addons-473197)       <target dev='hdc' bus='scsi'/>
	I0913 23:27:20.460339   13355 main.go:141] libmachine: (addons-473197)       <readonly/>
	I0913 23:27:20.460345   13355 main.go:141] libmachine: (addons-473197)     </disk>
	I0913 23:27:20.460351   13355 main.go:141] libmachine: (addons-473197)     <disk type='file' device='disk'>
	I0913 23:27:20.460361   13355 main.go:141] libmachine: (addons-473197)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:27:20.460368   13355 main.go:141] libmachine: (addons-473197)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/addons-473197.rawdisk'/>
	I0913 23:27:20.460375   13355 main.go:141] libmachine: (addons-473197)       <target dev='hda' bus='virtio'/>
	I0913 23:27:20.460379   13355 main.go:141] libmachine: (addons-473197)     </disk>
	I0913 23:27:20.460385   13355 main.go:141] libmachine: (addons-473197)     <interface type='network'>
	I0913 23:27:20.460390   13355 main.go:141] libmachine: (addons-473197)       <source network='mk-addons-473197'/>
	I0913 23:27:20.460397   13355 main.go:141] libmachine: (addons-473197)       <model type='virtio'/>
	I0913 23:27:20.460401   13355 main.go:141] libmachine: (addons-473197)     </interface>
	I0913 23:27:20.460408   13355 main.go:141] libmachine: (addons-473197)     <interface type='network'>
	I0913 23:27:20.460413   13355 main.go:141] libmachine: (addons-473197)       <source network='default'/>
	I0913 23:27:20.460419   13355 main.go:141] libmachine: (addons-473197)       <model type='virtio'/>
	I0913 23:27:20.460424   13355 main.go:141] libmachine: (addons-473197)     </interface>
	I0913 23:27:20.460430   13355 main.go:141] libmachine: (addons-473197)     <serial type='pty'>
	I0913 23:27:20.460446   13355 main.go:141] libmachine: (addons-473197)       <target port='0'/>
	I0913 23:27:20.460463   13355 main.go:141] libmachine: (addons-473197)     </serial>
	I0913 23:27:20.460475   13355 main.go:141] libmachine: (addons-473197)     <console type='pty'>
	I0913 23:27:20.460492   13355 main.go:141] libmachine: (addons-473197)       <target type='serial' port='0'/>
	I0913 23:27:20.460504   13355 main.go:141] libmachine: (addons-473197)     </console>
	I0913 23:27:20.460514   13355 main.go:141] libmachine: (addons-473197)     <rng model='virtio'>
	I0913 23:27:20.460527   13355 main.go:141] libmachine: (addons-473197)       <backend model='random'>/dev/random</backend>
	I0913 23:27:20.460540   13355 main.go:141] libmachine: (addons-473197)     </rng>
	I0913 23:27:20.460548   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460554   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460564   13355 main.go:141] libmachine: (addons-473197)   </devices>
	I0913 23:27:20.460574   13355 main.go:141] libmachine: (addons-473197) </domain>
	I0913 23:27:20.460592   13355 main.go:141] libmachine: (addons-473197) 
	I0913 23:27:20.466244   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:75:c0:ca in network default
	I0913 23:27:20.467639   13355 main.go:141] libmachine: (addons-473197) Ensuring networks are active...
	I0913 23:27:20.467669   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:20.468356   13355 main.go:141] libmachine: (addons-473197) Ensuring network default is active
	I0913 23:27:20.468605   13355 main.go:141] libmachine: (addons-473197) Ensuring network mk-addons-473197 is active
	I0913 23:27:20.469014   13355 main.go:141] libmachine: (addons-473197) Getting domain xml...
	I0913 23:27:20.469710   13355 main.go:141] libmachine: (addons-473197) Creating domain...
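	The domain XML above attaches two virtio NICs: one on the private mk-addons-473197 network and one on libvirt's default network. A quick, assumed way to confirm what was defined (again via virsh rather than the driver's libvirt API calls) is to dump the domain's NIC table; the MAC addresses it prints are the ones the following log lines key on while waiting for an IP.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the interfaces of the freshly defined domain. The expected output is
	// one row per NIC with its source network and MAC address
	// (52:54:00:2d:a5:2e on mk-addons-473197, 52:54:00:75:c0:ca on default,
	// per the log).
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"domiflist", "addons-473197").CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("virsh domiflist: %v\n%s", err, out))
	}
	fmt.Print(string(out))
}
```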
	I0913 23:27:21.903658   13355 main.go:141] libmachine: (addons-473197) Waiting to get IP...
	I0913 23:27:21.904363   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:21.904874   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:21.904902   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:21.904817   13377 retry.go:31] will retry after 304.697765ms: waiting for machine to come up
	I0913 23:27:22.211392   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.211878   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.211895   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.211847   13377 retry.go:31] will retry after 296.206544ms: waiting for machine to come up
	I0913 23:27:22.509388   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.510038   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.510074   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.509984   13377 retry.go:31] will retry after 351.816954ms: waiting for machine to come up
	I0913 23:27:22.863507   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.863981   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.864012   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.863920   13377 retry.go:31] will retry after 530.240488ms: waiting for machine to come up
	I0913 23:27:23.395630   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:23.396082   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:23.396145   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:23.396069   13377 retry.go:31] will retry after 548.533639ms: waiting for machine to come up
	I0913 23:27:23.945981   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:23.946426   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:23.946449   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:23.946390   13377 retry.go:31] will retry after 804.440442ms: waiting for machine to come up
	I0913 23:27:24.752386   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:24.752879   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:24.752901   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:24.752819   13377 retry.go:31] will retry after 784.165086ms: waiting for machine to come up
	I0913 23:27:25.538164   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:25.538541   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:25.538565   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:25.538498   13377 retry.go:31] will retry after 1.081622308s: waiting for machine to come up
	I0913 23:27:26.621460   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:26.621931   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:26.621955   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:26.621857   13377 retry.go:31] will retry after 1.731303856s: waiting for machine to come up
	I0913 23:27:28.354521   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:28.355071   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:28.355099   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:28.355009   13377 retry.go:31] will retry after 1.496214945s: waiting for machine to come up
	I0913 23:27:29.852809   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:29.853265   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:29.853301   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:29.853227   13377 retry.go:31] will retry after 2.460158583s: waiting for machine to come up
	I0913 23:27:32.316929   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:32.317410   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:32.317431   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:32.317373   13377 retry.go:31] will retry after 3.034476235s: waiting for machine to come up
	I0913 23:27:35.353176   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:35.353654   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:35.353699   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:35.353589   13377 retry.go:31] will retry after 4.290331524s: waiting for machine to come up
	I0913 23:27:39.649352   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.650002   13355 main.go:141] libmachine: (addons-473197) Found IP for machine: 192.168.39.50
	I0913 23:27:39.650019   13355 main.go:141] libmachine: (addons-473197) Reserving static IP address...
	I0913 23:27:39.650027   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has current primary IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.650461   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find host DHCP lease matching {name: "addons-473197", mac: "52:54:00:2d:a5:2e", ip: "192.168.39.50"} in network mk-addons-473197
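	The "will retry after …" lines above are a simple growing backoff: the driver repeatedly asks libvirt for a DHCP lease matching the VM's MAC until one appears (roughly 20 seconds here). A minimal sketch of the same idea, assuming virsh is used for the query and reusing the MAC and network names from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		network = "mk-addons-473197"
		mac     = "52:54:00:2d:a5:2e" // MAC the log waits on
	)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		// Ask libvirt for the current DHCP leases on the private network.
		out, _ := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dhcp-leases", network).CombinedOutput()
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fmt.Printf("lease found after %d attempts: %s\n", attempt, strings.TrimSpace(line))
				return
			}
		}
		fmt.Printf("no lease yet, retrying in %s\n", delay)
		time.Sleep(delay)
		// Grow the wait roughly like the log does, capped at a few seconds.
		delay *= 2
		if delay > 4*time.Second {
			delay = 4 * time.Second
		}
	}
}
```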
	I0913 23:27:39.721216   13355 main.go:141] libmachine: (addons-473197) DBG | Getting to WaitForSSH function...
	I0913 23:27:39.721243   13355 main.go:141] libmachine: (addons-473197) Reserved static IP address: 192.168.39.50
	I0913 23:27:39.721278   13355 main.go:141] libmachine: (addons-473197) Waiting for SSH to be available...
	I0913 23:27:39.723998   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.724611   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.724638   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.724950   13355 main.go:141] libmachine: (addons-473197) DBG | Using SSH client type: external
	I0913 23:27:39.724977   13355 main.go:141] libmachine: (addons-473197) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa (-rw-------)
	I0913 23:27:39.725008   13355 main.go:141] libmachine: (addons-473197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:27:39.725021   13355 main.go:141] libmachine: (addons-473197) DBG | About to run SSH command:
	I0913 23:27:39.725036   13355 main.go:141] libmachine: (addons-473197) DBG | exit 0
	I0913 23:27:39.855960   13355 main.go:141] libmachine: (addons-473197) DBG | SSH cmd err, output: <nil>: 
	I0913 23:27:39.856254   13355 main.go:141] libmachine: (addons-473197) KVM machine creation complete!
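	WaitForSSH above shells out to the system ssh client with a hardened option set (no host-key checking, no password auth, quiet logging) and runs `exit 0` until it succeeds. A sketch of that probe, reusing the exact options and key path shown in the log a few lines earlier:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no", "-o", "ControlPath=none",
		"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", keyPath,
		"-p", "22", "docker@192.168.39.50", "exit 0",
	}
	// Keep probing until the guest's sshd accepts the key and the remote
	// command exits 0.
	for {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
}
```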
	I0913 23:27:39.856646   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:39.857244   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:39.857451   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:39.857626   13355 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:27:39.857643   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:27:39.858795   13355 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:27:39.858808   13355 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:27:39.858813   13355 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:27:39.858832   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:39.861250   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.861689   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.861723   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.861906   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:39.862060   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.862212   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.862395   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:39.862569   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:39.862742   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:39.862751   13355 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:27:39.967145   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:39.967169   13355 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:27:39.967179   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:39.969704   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.970052   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.970076   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.970268   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:39.970477   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.970645   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.970782   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:39.970951   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:39.971103   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:39.971115   13355 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:27:40.076316   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:27:40.076451   13355 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:27:40.076469   13355 main.go:141] libmachine: Provisioning with buildroot...
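	Provisioner detection above amounts to `cat /etc/os-release` and matching the ID field; `ID=buildroot` selects the buildroot provisioner. A small sketch of parsing that file, assuming the usual key=value format shown in the output above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect KEY=value pairs, stripping surrounding quotes.
	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			info[k] = strings.Trim(v, `"`)
		}
	}
	// minikube-style decision: a Buildroot guest gets the buildroot provisioner.
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
	} else {
		fmt.Println("unhandled distro:", info["ID"])
	}
}
```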
	I0913 23:27:40.076484   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.076736   13355 buildroot.go:166] provisioning hostname "addons-473197"
	I0913 23:27:40.076759   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.076929   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.079647   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.080051   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.080075   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.080207   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.080376   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.080576   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.080715   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.080902   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.081066   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.081078   13355 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-473197 && echo "addons-473197" | sudo tee /etc/hostname
	I0913 23:27:40.201203   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-473197
	
	I0913 23:27:40.201232   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.203941   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.204266   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.204295   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.204445   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.204612   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.204717   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.204938   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.205096   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.205257   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.205288   13355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-473197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-473197/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-473197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:27:40.315830   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:40.315864   13355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:27:40.315886   13355 buildroot.go:174] setting up certificates
	I0913 23:27:40.315900   13355 provision.go:84] configureAuth start
	I0913 23:27:40.315916   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.316174   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:40.318560   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.318909   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.318938   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.319047   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.320812   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.321063   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.321089   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.321172   13355 provision.go:143] copyHostCerts
	I0913 23:27:40.321244   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:27:40.321370   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:27:40.321425   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:27:40.321473   13355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.addons-473197 san=[127.0.0.1 192.168.39.50 addons-473197 localhost minikube]
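	provision.go generates a per-machine server certificate with the SANs listed above (127.0.0.1, 192.168.39.50, addons-473197, localhost, minikube). A compressed sketch of building a certificate with those SANs using the standard library; for brevity it is self-signed here, whereas minikube signs it with the ca.pem/ca-key.pem pair created earlier.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-473197"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"addons-473197", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.50")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```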
	I0913 23:27:40.603148   13355 provision.go:177] copyRemoteCerts
	I0913 23:27:40.603210   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:27:40.603234   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.606258   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.606705   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.606739   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.607033   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.607251   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.607362   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.607463   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:40.689713   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:27:40.712453   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:27:40.735387   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:27:40.757966   13355 provision.go:87] duration metric: took 442.049406ms to configureAuth
	I0913 23:27:40.758001   13355 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:27:40.758169   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:27:40.758238   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.760689   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.761096   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.761116   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.761352   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.761591   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.761778   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.761925   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.762072   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.762249   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.762265   13355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:27:40.978781   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:27:40.978810   13355 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:27:40.978820   13355 main.go:141] libmachine: (addons-473197) Calling .GetURL
	I0913 23:27:40.980184   13355 main.go:141] libmachine: (addons-473197) DBG | Using libvirt version 6000000
	I0913 23:27:40.982058   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.982375   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.982407   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.982552   13355 main.go:141] libmachine: Docker is up and running!
	I0913 23:27:40.982564   13355 main.go:141] libmachine: Reticulating splines...
	I0913 23:27:40.982573   13355 client.go:171] duration metric: took 21.165531853s to LocalClient.Create
	I0913 23:27:40.982600   13355 start.go:167] duration metric: took 21.165604233s to libmachine.API.Create "addons-473197"
	I0913 23:27:40.982612   13355 start.go:293] postStartSetup for "addons-473197" (driver="kvm2")
	I0913 23:27:40.982626   13355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:27:40.982643   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:40.982883   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:27:40.982909   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.985049   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.985372   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.985397   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.985529   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.985759   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.985932   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.986038   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.069472   13355 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:27:41.073428   13355 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:27:41.073453   13355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:27:41.073517   13355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:27:41.073538   13355 start.go:296] duration metric: took 90.917797ms for postStartSetup
	I0913 23:27:41.073579   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:41.074107   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:41.077174   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.077818   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.077852   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.078209   13355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json ...
	I0913 23:27:41.078430   13355 start.go:128] duration metric: took 21.280308685s to createHost
	I0913 23:27:41.078523   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.080871   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.081492   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.081509   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.081740   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.081948   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.082106   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.082226   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.082357   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:41.082590   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:41.082607   13355 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:27:41.188427   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726270061.160461194
	
	I0913 23:27:41.188463   13355 fix.go:216] guest clock: 1726270061.160461194
	I0913 23:27:41.188474   13355 fix.go:229] Guest: 2024-09-13 23:27:41.160461194 +0000 UTC Remote: 2024-09-13 23:27:41.078444881 +0000 UTC m=+21.385670707 (delta=82.016313ms)
	I0913 23:27:41.188531   13355 fix.go:200] guest clock delta is within tolerance: 82.016313ms
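	The fix.go lines above are the guest/host clock-skew check: `date +%s.%N` is run in the guest, parsed as fractional seconds, and compared with the host clock; the 82ms delta is inside tolerance, so no resync is needed. A sketch of that comparison, assuming the guest timestamp has already been captured (the tolerance constant here is illustrative, not minikube's actual value):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestRaw := "1726270061.160461194"
	sec, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))

	// Host clock read at (ideally) the same moment the guest was sampled.
	host := time.Now()
	delta := host.Sub(guest)

	const tolerance = 2 * time.Second // illustrative threshold only
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```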
	I0913 23:27:41.188539   13355 start.go:83] releasing machines lock for "addons-473197", held for 21.390482943s
	I0913 23:27:41.188568   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.188834   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:41.191630   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.192076   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.192098   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.192320   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.192816   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.192990   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.193060   13355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:27:41.193115   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.193231   13355 ssh_runner.go:195] Run: cat /version.json
	I0913 23:27:41.193263   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.195906   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196214   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196337   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.196366   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196541   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.196670   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.196705   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196706   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.196834   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.196880   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.197034   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.197031   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.197160   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.197329   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.272290   13355 ssh_runner.go:195] Run: systemctl --version
	I0913 23:27:41.309754   13355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:27:41.465120   13355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:27:41.470808   13355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:27:41.470872   13355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:27:41.486194   13355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:27:41.486219   13355 start.go:495] detecting cgroup driver to use...
	I0913 23:27:41.486277   13355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:27:41.501356   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:27:41.514148   13355 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:27:41.514201   13355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:27:41.526902   13355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:27:41.539813   13355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:27:41.653998   13355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:27:41.795256   13355 docker.go:233] disabling docker service ...
	I0913 23:27:41.795338   13355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:27:41.808732   13355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:27:41.820663   13355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:27:41.960800   13355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:27:42.071315   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:27:42.085863   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:42.104721   13355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:27:42.104778   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.115928   13355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:27:42.116006   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.126630   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.136692   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.146840   13355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:27:42.158680   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.169310   13355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.187197   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
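	The sequence of sed commands above converges on a small CRI-O drop-in: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs with conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A sketch that writes the equivalent drop-in directly instead of patching it in place; the exact section layout is an assumption, and CRI-O still has to be restarted afterwards, as the log does.

```go
package main

import "os"

// dropIn is roughly what /etc/crio/crio.conf.d/02-crio.conf looks like after
// the sed edits in the log: pause image, cgroupfs driver, conmon cgroup, and
// the unprivileged-port sysctl.
const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// A `systemctl restart crio` (as in the log) is still needed to apply it.
}
```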
	I0913 23:27:42.197346   13355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:27:42.206456   13355 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:27:42.206517   13355 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:27:42.218600   13355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:27:42.228617   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:42.336875   13355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:27:42.432370   13355 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:27:42.432459   13355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:27:42.436970   13355 start.go:563] Will wait 60s for crictl version
	I0913 23:27:42.437040   13355 ssh_runner.go:195] Run: which crictl
	I0913 23:27:42.440590   13355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:27:42.475674   13355 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:27:42.475820   13355 ssh_runner.go:195] Run: crio --version
	I0913 23:27:42.501858   13355 ssh_runner.go:195] Run: crio --version
	I0913 23:27:42.529367   13355 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:27:42.530946   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:42.533556   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:42.533907   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:42.533934   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:42.534104   13355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:27:42.537936   13355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:42.549881   13355 kubeadm.go:883] updating cluster {Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:27:42.549978   13355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:42.550015   13355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:27:42.581270   13355 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 23:27:42.581333   13355 ssh_runner.go:195] Run: which lz4
	I0913 23:27:42.584936   13355 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 23:27:42.588777   13355 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 23:27:42.588812   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 23:27:43.814973   13355 crio.go:462] duration metric: took 1.230077023s to copy over tarball
	I0913 23:27:43.815032   13355 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 23:27:45.932346   13355 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117279223s)
	I0913 23:27:45.932374   13355 crio.go:469] duration metric: took 2.117376082s to extract the tarball
	I0913 23:27:45.932383   13355 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 23:27:45.968777   13355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:27:46.009560   13355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 23:27:46.009591   13355 cache_images.go:84] Images are preloaded, skipping loading
	I0913 23:27:46.009602   13355 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.31.1 crio true true} ...
	I0913 23:27:46.009706   13355 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-473197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:27:46.009801   13355 ssh_runner.go:195] Run: crio config
	I0913 23:27:46.058212   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:46.058233   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:46.058242   13355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:27:46.058265   13355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-473197 NodeName:addons-473197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:27:46.058390   13355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-473197"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:27:46.058449   13355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:46.067747   13355 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:27:46.067836   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 23:27:46.076323   13355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:27:46.091845   13355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:27:46.107011   13355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0913 23:27:46.122091   13355 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0913 23:27:46.125699   13355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:46.136584   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:46.243887   13355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:46.259537   13355 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197 for IP: 192.168.39.50
	I0913 23:27:46.259566   13355 certs.go:194] generating shared ca certs ...
	I0913 23:27:46.259587   13355 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.259827   13355 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:27:46.322225   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt ...
	I0913 23:27:46.322258   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt: {Name:mke46b90c0d6e2a0d22a599cb0925a94af7cb890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.322470   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key ...
	I0913 23:27:46.322490   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key: {Name:mkeed16d615b1d7b45fa5c87fb359fe1941c704d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.322591   13355 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:27:46.462878   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt ...
	I0913 23:27:46.462907   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt: {Name:mk6b1da2351e5a548bbce01c78eb8ec03bbc9cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.463051   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key ...
	I0913 23:27:46.463061   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key: {Name:mk7ea15f150fb9588b92c5379cfdb24690c332b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.463123   13355 certs.go:256] generating profile certs ...
	I0913 23:27:46.463171   13355 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key
	I0913 23:27:46.463184   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt with IP's: []
	I0913 23:27:46.657652   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt ...
	I0913 23:27:46.657686   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: {Name:mk5f50c2130cbf6a4ae973b8a645d8dcfcea5e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.657857   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key ...
	I0913 23:27:46.657870   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key: {Name:mk3ec218d1db7592ee3144e8458afc6e59c3670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.657934   13355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74
	I0913 23:27:46.657951   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I0913 23:27:46.879416   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 ...
	I0913 23:27:46.879453   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74: {Name:mkcaab583500a609e501e4f9e7f67d24dbf8d267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.879638   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74 ...
	I0913 23:27:46.879651   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74: {Name:mk892a816842ba211b137a4d62befccce1e5b073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.879724   13355 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt
	I0913 23:27:46.879814   13355 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key
	I0913 23:27:46.879862   13355 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key
	I0913 23:27:46.879879   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt with IP's: []
	I0913 23:27:46.991498   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt ...
	I0913 23:27:46.991530   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt: {Name:mkb643e56ac833ce28178330ec7aa1dda3e56b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.991685   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key ...
	I0913 23:27:46.991696   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key: {Name:mka2351863ee87552b80a1470ad4d30098e9cd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.991874   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:27:46.991908   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:27:46.991933   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:27:46.991956   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:27:46.992518   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:27:47.019880   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:27:47.046183   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:27:47.074948   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:27:47.097532   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 23:27:47.121957   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 23:27:47.146163   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:27:47.170775   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 23:27:47.194281   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:27:47.217329   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:27:47.233678   13355 ssh_runner.go:195] Run: openssl version
	I0913 23:27:47.239354   13355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:27:47.249994   13355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.254467   13355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.254522   13355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.260224   13355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:27:47.270703   13355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:27:47.274594   13355 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:27:47.274645   13355 kubeadm.go:392] StartCluster: {Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:47.274712   13355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 23:27:47.274753   13355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 23:27:47.309951   13355 cri.go:89] found id: ""
	I0913 23:27:47.310012   13355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:27:47.320386   13355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:27:47.330943   13355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:27:47.341759   13355 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:27:47.341780   13355 kubeadm.go:157] found existing configuration files:
	
	I0913 23:27:47.341834   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:27:47.351646   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:27:47.351717   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:27:47.361297   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:27:47.370696   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:27:47.370762   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:27:47.380638   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:27:47.389574   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:27:47.389643   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:27:47.398896   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:27:47.408606   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:27:47.408676   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:27:47.418572   13355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 23:27:47.479386   13355 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:27:47.479472   13355 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:27:47.586391   13355 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:27:47.586505   13355 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:27:47.586582   13355 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:27:47.595987   13355 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:27:47.760778   13355 out.go:235]   - Generating certificates and keys ...
	I0913 23:27:47.760900   13355 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:27:47.760974   13355 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:27:47.761064   13355 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:27:47.820089   13355 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:27:47.938680   13355 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:27:48.078014   13355 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:27:48.155692   13355 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:27:48.155847   13355 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-473197 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0913 23:27:48.397795   13355 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:27:48.397964   13355 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-473197 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0913 23:27:48.511295   13355 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:27:48.569260   13355 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:27:48.662216   13355 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:27:48.662475   13355 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:27:48.761318   13355 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:27:49.204225   13355 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:27:49.285052   13355 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:27:49.530932   13355 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:27:49.596255   13355 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:27:49.596809   13355 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:27:49.599274   13355 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:27:49.601179   13355 out.go:235]   - Booting up control plane ...
	I0913 23:27:49.601276   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:27:49.601348   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:27:49.601425   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:27:49.616053   13355 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:27:49.622415   13355 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:27:49.622489   13355 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:27:49.742292   13355 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:27:49.742405   13355 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:27:50.257638   13355 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 516.207513ms
	I0913 23:27:50.257765   13355 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:27:56.254726   13355 kubeadm.go:310] [api-check] The API server is healthy after 6.001344082s
	I0913 23:27:56.266993   13355 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:27:56.292355   13355 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:27:56.323160   13355 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:27:56.323401   13355 kubeadm.go:310] [mark-control-plane] Marking the node addons-473197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:27:56.339238   13355 kubeadm.go:310] [bootstrap-token] Using token: 39ittl.8h26ubvfwyg116f4
	I0913 23:27:56.340707   13355 out.go:235]   - Configuring RBAC rules ...
	I0913 23:27:56.340853   13355 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:27:56.349574   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:27:56.357917   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:27:56.365875   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:27:56.370732   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:27:56.375167   13355 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:27:56.666388   13355 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:27:57.109792   13355 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:27:57.661157   13355 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:27:57.662033   13355 kubeadm.go:310] 
	I0913 23:27:57.662163   13355 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:27:57.662184   13355 kubeadm.go:310] 
	I0913 23:27:57.662303   13355 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:27:57.662326   13355 kubeadm.go:310] 
	I0913 23:27:57.662361   13355 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:27:57.662417   13355 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:27:57.662496   13355 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:27:57.662508   13355 kubeadm.go:310] 
	I0913 23:27:57.662586   13355 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:27:57.662598   13355 kubeadm.go:310] 
	I0913 23:27:57.662671   13355 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:27:57.662687   13355 kubeadm.go:310] 
	I0913 23:27:57.662760   13355 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:27:57.662855   13355 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:27:57.662958   13355 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:27:57.662976   13355 kubeadm.go:310] 
	I0913 23:27:57.663089   13355 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:27:57.663197   13355 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:27:57.663210   13355 kubeadm.go:310] 
	I0913 23:27:57.663318   13355 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 39ittl.8h26ubvfwyg116f4 \
	I0913 23:27:57.663464   13355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0913 23:27:57.663493   13355 kubeadm.go:310] 	--control-plane 
	I0913 23:27:57.663502   13355 kubeadm.go:310] 
	I0913 23:27:57.663615   13355 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:27:57.663626   13355 kubeadm.go:310] 
	I0913 23:27:57.663737   13355 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 39ittl.8h26ubvfwyg116f4 \
	I0913 23:27:57.663903   13355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0913 23:27:57.665427   13355 kubeadm.go:310] W0913 23:27:47.456712     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:57.665734   13355 kubeadm.go:310] W0913 23:27:47.457675     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:57.665846   13355 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 23:27:57.665879   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:57.665892   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:57.667738   13355 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 23:27:57.668898   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 23:27:57.681342   13355 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 23:27:57.704842   13355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:27:57.704978   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:57.705001   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-473197 minikube.k8s.io/updated_at=2024_09_13T23_27_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-473197 minikube.k8s.io/primary=true
	I0913 23:27:57.725824   13355 ops.go:34] apiserver oom_adj: -16
	I0913 23:27:57.846283   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:58.347074   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:58.847401   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:59.346340   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:59.846585   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:00.346364   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:00.846560   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:01.347311   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:01.847237   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:02.346723   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:02.425605   13355 kubeadm.go:1113] duration metric: took 4.720714541s to wait for elevateKubeSystemPrivileges
	I0913 23:28:02.425645   13355 kubeadm.go:394] duration metric: took 15.151004151s to StartCluster
	I0913 23:28:02.425662   13355 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:28:02.425785   13355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:28:02.426125   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:28:02.426288   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:28:02.426308   13355 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:28:02.426365   13355 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 23:28:02.426474   13355 addons.go:69] Setting yakd=true in profile "addons-473197"
	I0913 23:28:02.426504   13355 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-473197"
	I0913 23:28:02.426508   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:28:02.426517   13355 addons.go:234] Setting addon yakd=true in "addons-473197"
	I0913 23:28:02.426521   13355 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-473197"
	I0913 23:28:02.426514   13355 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-473197"
	I0913 23:28:02.426549   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426556   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426559   13355 addons.go:69] Setting helm-tiller=true in profile "addons-473197"
	I0913 23:28:02.426574   13355 addons.go:234] Setting addon helm-tiller=true in "addons-473197"
	I0913 23:28:02.426574   13355 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-473197"
	I0913 23:28:02.426596   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426597   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426602   13355 addons.go:69] Setting ingress=true in profile "addons-473197"
	I0913 23:28:02.426631   13355 addons.go:234] Setting addon ingress=true in "addons-473197"
	I0913 23:28:02.426669   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426477   13355 addons.go:69] Setting gcp-auth=true in profile "addons-473197"
	I0913 23:28:02.426731   13355 mustload.go:65] Loading cluster: addons-473197
	I0913 23:28:02.426862   13355 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-473197"
	I0913 23:28:02.426884   13355 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-473197"
	I0913 23:28:02.426885   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:28:02.427037   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427060   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427061   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427087   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427085   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427129   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427141   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427152   13355 addons.go:69] Setting metrics-server=true in profile "addons-473197"
	I0913 23:28:02.427165   13355 addons.go:234] Setting addon metrics-server=true in "addons-473197"
	I0913 23:28:02.426553   13355 addons.go:69] Setting ingress-dns=true in profile "addons-473197"
	I0913 23:28:02.427179   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427190   13355 addons.go:69] Setting volcano=true in profile "addons-473197"
	I0913 23:28:02.427201   13355 addons.go:234] Setting addon volcano=true in "addons-473197"
	I0913 23:28:02.427211   13355 addons.go:69] Setting registry=true in profile "addons-473197"
	I0913 23:28:02.427221   13355 addons.go:234] Setting addon registry=true in "addons-473197"
	I0913 23:28:02.427222   13355 addons.go:69] Setting storage-provisioner=true in profile "addons-473197"
	I0913 23:28:02.427145   13355 addons.go:69] Setting inspektor-gadget=true in profile "addons-473197"
	I0913 23:28:02.427230   13355 addons.go:69] Setting volumesnapshots=true in profile "addons-473197"
	I0913 23:28:02.427235   13355 addons.go:234] Setting addon storage-provisioner=true in "addons-473197"
	I0913 23:28:02.427239   13355 addons.go:234] Setting addon volumesnapshots=true in "addons-473197"
	I0913 23:28:02.427241   13355 addons.go:234] Setting addon inspektor-gadget=true in "addons-473197"
	I0913 23:28:02.427179   13355 addons.go:234] Setting addon ingress-dns=true in "addons-473197"
	I0913 23:28:02.426496   13355 addons.go:69] Setting cloud-spanner=true in profile "addons-473197"
	I0913 23:28:02.427256   13355 addons.go:234] Setting addon cloud-spanner=true in "addons-473197"
	I0913 23:28:02.426488   13355 addons.go:69] Setting default-storageclass=true in profile "addons-473197"
	I0913 23:28:02.427269   13355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-473197"
	I0913 23:28:02.427330   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427431   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427455   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427463   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427473   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427490   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427456   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427570   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427595   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427628   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427709   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427731   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427821   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427840   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427846   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427873   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427975   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427998   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428068   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428087   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428114   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.428139   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428165   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428185   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428215   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428298   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.428659   13355 out.go:177] * Verifying Kubernetes components...
	I0913 23:28:02.430798   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:28:02.443895   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0913 23:28:02.447836   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0913 23:28:02.460485   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0913 23:28:02.460874   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.460923   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.462824   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.462871   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.475879   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.475930   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.475955   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476088   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476184   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476509   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476527   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476781   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476799   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476841   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476853   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476873   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.477429   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.477470   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.477700   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.477702   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.478295   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.478318   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.478339   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.478341   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.490076   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34039
	I0913 23:28:02.490970   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.491812   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.491835   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.492304   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.492616   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I0913 23:28:02.493134   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.493820   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.493836   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.494007   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.494989   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0913 23:28:02.497252   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.498698   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.498751   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.499326   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.501553   13355 addons.go:234] Setting addon default-storageclass=true in "addons-473197"
	I0913 23:28:02.501602   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.501968   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.502005   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.502350   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.502365   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.502909   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.503277   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.506570   13355 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-473197"
	I0913 23:28:02.506622   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.507002   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.507046   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.514594   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0913 23:28:02.514782   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0913 23:28:02.515367   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.516584   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.516605   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.517040   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.517686   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.517727   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.518257   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.518363   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0913 23:28:02.519018   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.519037   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.519440   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.519657   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.522038   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I0913 23:28:02.522040   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.522394   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.522435   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.522727   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.522970   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.523316   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.523334   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.523520   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.523532   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.523938   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.524557   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.524603   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.525745   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0913 23:28:02.526425   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.526729   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0913 23:28:02.527167   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.527412   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.527429   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.527767   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.527985   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.528007   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.529141   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.529182   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.529468   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.529539   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.530024   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.530070   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.531195   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0913 23:28:02.531769   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0913 23:28:02.532339   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.532382   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.532396   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.532869   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.532894   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.533439   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.533682   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0913 23:28:02.534092   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0913 23:28:02.539974   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0913 23:28:02.540404   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.541583   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.541602   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.541629   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0913 23:28:02.541958   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.542365   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.544696   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.546879   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0913 23:28:02.547431   13355 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 23:28:02.548325   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0913 23:28:02.549791   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:28:02.549808   13355 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 23:28:02.549834   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.552042   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0913 23:28:02.564110   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0913 23:28:02.564127   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.564132   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0913 23:28:02.564116   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.564213   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.564232   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.564116   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0913 23:28:02.564383   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0913 23:28:02.564467   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.564922   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.564951   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.564962   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.565052   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.565067   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.565129   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.565819   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.565933   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.565964   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566027   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.566035   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566045   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.566054   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.566091   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566112   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566136   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566148   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566172   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0913 23:28:02.567152   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567167   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567256   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567262   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567340   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567349   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567388   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.567474   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567480   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567531   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567546   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567556   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.567609   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.567654   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567664   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567713   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567738   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567747   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567749   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567798   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567823   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567915   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567929   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.568085   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568102   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.568148   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568172   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568188   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568215   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568340   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568402   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.568439   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.568464   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568519   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568665   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.568699   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.569269   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.569415   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.569426   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.569482   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.569514   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.569816   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.570420   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.570455   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.570772   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.571101   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.571153   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.571923   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.571964   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.571931   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.572189   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.572204   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.572331   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.572395   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:02.572403   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:02.573462   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.573488   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.573510   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:02.573528   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:02.574423   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:02.574434   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:02.574441   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:02.573549   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.575002   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:02.575025   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.575042   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:02.577246   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	W0913 23:28:02.577336   13355 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0913 23:28:02.577577   13355 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0913 23:28:02.577709   13355 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 23:28:02.578460   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:02.578635   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0913 23:28:02.578647   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0913 23:28:02.578665   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.579318   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.579608   13355 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:28:02.579851   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 23:28:02.579873   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.579633   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 23:28:02.580938   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:02.580994   13355 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 23:28:02.582107   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 23:28:02.582192   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:28:02.582204   13355 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 23:28:02.582234   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.583277   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.583707   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.583726   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.584022   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.584195   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.584220   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 23:28:02.584401   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.584539   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.584948   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.585611   13355 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:28:02.585633   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 23:28:02.585650   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.585705   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 23:28:02.585820   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.585840   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.586103   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.586338   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.586392   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.586488   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.586648   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.586902   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.586918   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.587228   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.587417   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.587574   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.587716   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.588599   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 23:28:02.589562   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.589986   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.590012   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.590304   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.590503   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.590650   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.590784   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.591169   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 23:28:02.592433   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 23:28:02.592982   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0913 23:28:02.593391   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.593880   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.593904   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.594231   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.594357   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 23:28:02.594365   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.594869   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I0913 23:28:02.595307   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.595835   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.595857   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.596170   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0913 23:28:02.596327   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 23:28:02.596346   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.596551   13355 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:28:02.596571   13355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:28:02.596587   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.596642   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.597297   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.597523   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.597839   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:28:02.597858   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 23:28:02.597882   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.598022   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.598046   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.598359   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.598480   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.600521   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.601746   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.601897   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602020   13355 out.go:177]   - Using image docker.io/busybox:stable
	I0913 23:28:02.602261   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.602280   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602309   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.602332   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602578   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.602638   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.602773   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.602789   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.602928   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.602925   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.603038   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.603319   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.603369   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.604203   13355 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 23:28:02.605136   13355 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 23:28:02.605271   13355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:28:02.605291   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 23:28:02.605308   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.605737   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0913 23:28:02.606098   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.606452   13355 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 23:28:02.606469   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 23:28:02.606483   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.606619   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.606637   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.606672   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0913 23:28:02.607023   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.607041   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.607206   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.607447   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.607462   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.608306   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.608506   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.609969   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.610193   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.610631   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.610650   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.610800   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.610936   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.611209   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.611513   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.611607   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.611708   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 23:28:02.611881   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.612337   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.612359   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.612853   13355 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 23:28:02.612890   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.612853   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:28:02.612935   13355 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 23:28:02.612955   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.613679   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.614172   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.614301   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.614358   13355 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:28:02.614375   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 23:28:02.614391   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.616797   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617557   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.617585   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617630   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.617685   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617710   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0913 23:28:02.617846   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.617859   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.617871   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.618067   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.618131   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.618188   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.618375   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.618533   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.618780   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.618907   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.618920   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.619116   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.619427   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.619639   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.621163   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.623020   13355 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 23:28:02.624487   13355 out.go:177]   - Using image docker.io/registry:2.8.3
	W0913 23:28:02.625390   13355 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.50:22: read: connection reset by peer
	I0913 23:28:02.625420   13355 retry.go:31] will retry after 203.721913ms: ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.50:22: read: connection reset by peer
	I0913 23:28:02.625979   13355 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:28:02.625996   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 23:28:02.626020   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.626338   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0913 23:28:02.626915   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.628248   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.628278   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.628731   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.628951   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.629689   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.630408   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I0913 23:28:02.630603   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.630607   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.630644   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.630752   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.630897   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.630954   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.631042   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.631079   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.631402   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.631424   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.631759   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.632088   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.632893   13355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:28:02.633647   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.634372   13355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:28:02.634400   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:28:02.634419   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.634983   13355 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 23:28:02.635926   13355 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:28:02.635944   13355 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 23:28:02.635961   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.637913   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638299   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.638327   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638429   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638456   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.638653   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.638829   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.639005   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.638906   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.639049   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.639088   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.639199   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.639371   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.639577   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:03.010874   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 23:28:03.011331   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:28:03.027301   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:28:03.027323   13355 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 23:28:03.067510   13355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:28:03.067570   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 23:28:03.088658   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:28:03.092881   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:28:03.096079   13355 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:28:03.096109   13355 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 23:28:03.118568   13355 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:28:03.118604   13355 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 23:28:03.151579   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:28:03.151606   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 23:28:03.163844   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:28:03.171501   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0913 23:28:03.171531   13355 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0913 23:28:03.174545   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:28:03.174571   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 23:28:03.212903   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:28:03.223453   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:28:03.228572   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:28:03.228604   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 23:28:03.250777   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:28:03.250803   13355 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 23:28:03.279463   13355 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 23:28:03.279488   13355 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 23:28:03.302426   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:28:03.302459   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 23:28:03.319332   13355 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:28:03.319353   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 23:28:03.330057   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:28:03.330085   13355 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0913 23:28:03.407024   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:28:03.407056   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 23:28:03.440023   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:28:03.440055   13355 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 23:28:03.479290   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:28:03.479317   13355 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 23:28:03.491399   13355 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:28:03.491426   13355 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 23:28:03.520500   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:28:03.531329   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:28:03.531360   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 23:28:03.560362   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:28:03.703012   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:28:03.703042   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 23:28:03.713271   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:28:03.713301   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 23:28:03.714632   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:28:03.714653   13355 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 23:28:03.719658   13355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:28:03.719678   13355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 23:28:03.737269   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:28:03.737304   13355 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 23:28:03.889071   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:28:03.908115   13355 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:03.908155   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 23:28:03.918960   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:28:03.941219   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:28:03.941249   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 23:28:03.994232   13355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:28:03.994259   13355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 23:28:04.229209   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:04.267554   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:28:04.267577   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 23:28:04.330516   13355 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:28:04.330552   13355 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 23:28:04.536905   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:28:04.536936   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 23:28:04.590128   13355 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:28:04.590152   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 23:28:04.788803   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:28:04.816897   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:28:04.816931   13355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 23:28:05.234442   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:28:05.234478   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 23:28:05.583587   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:28:05.583614   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 23:28:05.923679   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:28:05.923710   13355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 23:28:06.123490   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.112567467s)
	I0913 23:28:06.123547   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:06.123557   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:06.123855   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:06.123869   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:06.123883   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:06.123892   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:06.124216   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:06.124238   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:06.363736   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:28:07.633977   13355 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.566365758s)
	I0913 23:28:07.634011   13355 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
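
The replace pipeline logged just above is how the host.minikube.internal record gets into the cluster: the coredns ConfigMap is dumped, a hosts block pointing at the host gateway (192.168.39.1) is spliced in ahead of the forward plugin with sed, and the result is pushed back with kubectl replace. Below is a minimal client-go sketch of the same ConfigMap edit; the kubeconfig path and gateway IP are copied from the log, and the direct Update call is an assumption for illustration rather than minikube's actual kubectl-over-SSH path.

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path copied from the log; adjust for other setups.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
            fmt.Println("host record already present")
            return
        }
        // Splice a hosts{} block in front of the forward plugin, mirroring
        // the sed expression in the logged pipeline.
        hostsBlock := []string{
            "        hosts {",
            "           192.168.39.1 host.minikube.internal",
            "           fallthrough",
            "        }",
        }
        var out []string
        for _, line := range strings.Split(cm.Data["Corefile"], "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out = append(out, hostsBlock...)
            }
            out = append(out, line)
        }
        cm.Data["Corefile"] = strings.Join(out, "\n")
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("injected host.minikube.internal into the coredns Corefile")
    }
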
	I0913 23:28:07.634023   13355 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.566476402s)
	I0913 23:28:07.634039   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.622680865s)
	I0913 23:28:07.634089   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.634105   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.634380   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.634428   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.634436   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.634448   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.634455   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.634784   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.634856   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.634890   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.635047   13355 node_ready.go:35] waiting up to 6m0s for node "addons-473197" to be "Ready" ...
	I0913 23:28:07.650081   13355 node_ready.go:49] node "addons-473197" has status "Ready":"True"
	I0913 23:28:07.650107   13355 node_ready.go:38] duration metric: took 15.042078ms for node "addons-473197" to be "Ready" ...
	I0913 23:28:07.650117   13355 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:07.696618   13355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:07.988840   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.900143589s)
	I0913 23:28:07.988889   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.988902   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.988909   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.895998713s)
	I0913 23:28:07.988947   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.988962   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.988991   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.825104432s)
	I0913 23:28:07.989064   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.776127396s)
	I0913 23:28:07.989142   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989163   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989177   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989178   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.989192   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989202   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989230   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989069   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989500   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989274   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989532   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989541   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989547   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989777   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.989817   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989833   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989842   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989843   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989850   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989854   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989856   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989864   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989280   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990285   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990340   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990363   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990372   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989408   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990392   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.990409   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990434   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990442   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989433   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.992583   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.992598   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:08.078646   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:08.078674   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:08.079091   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:08.079153   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:08.079168   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	W0913 23:28:08.079276   13355 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0913 23:28:08.086087   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:08.086136   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:08.086492   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:08.086562   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:08.086620   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:08.150438   13355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-473197" context rescaled to 1 replicas
	I0913 23:28:08.748384   13355 pod_ready.go:93] pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.748408   13355 pod_ready.go:82] duration metric: took 1.05175792s for pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.748418   13355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.799453   13355 pod_ready.go:93] pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.799484   13355 pod_ready.go:82] duration metric: took 51.058777ms for pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.799510   13355 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.874578   13355 pod_ready.go:93] pod "etcd-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.874605   13355 pod_ready.go:82] duration metric: took 75.087265ms for pod "etcd-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.874616   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.604747   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 23:28:09.604789   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:09.608703   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:09.609227   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:09.609263   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:09.609479   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:09.609669   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:09.609849   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:09.610002   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:09.882148   13355 pod_ready.go:93] pod "kube-apiserver-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:09.882180   13355 pod_ready.go:82] duration metric: took 1.007556164s for pod "kube-apiserver-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.882192   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.894451   13355 pod_ready.go:93] pod "kube-controller-manager-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:09.894497   13355 pod_ready.go:82] duration metric: took 12.295374ms for pod "kube-controller-manager-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.894514   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vg8p5" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.038855   13355 pod_ready.go:93] pod "kube-proxy-vg8p5" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:10.038887   13355 pod_ready.go:82] duration metric: took 144.362352ms for pod "kube-proxy-vg8p5" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.038901   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.156523   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 23:28:10.274748   13355 addons.go:234] Setting addon gcp-auth=true in "addons-473197"
	I0913 23:28:10.274811   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:10.275129   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:10.275181   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:10.290032   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0913 23:28:10.290544   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:10.291078   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:10.291104   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:10.291475   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:10.292074   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:10.292121   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:10.306929   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0913 23:28:10.307597   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:10.308136   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:10.308165   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:10.308479   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:10.308653   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:10.310373   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:10.310613   13355 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 23:28:10.310635   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:10.313460   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:10.313874   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:10.313918   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:10.314081   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:10.314245   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:10.314388   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:10.314538   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:10.441154   13355 pod_ready.go:93] pod "kube-scheduler-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:10.441189   13355 pod_ready.go:82] duration metric: took 402.279342ms for pod "kube-scheduler-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.441203   13355 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:11.038273   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.814781844s)
	I0913 23:28:11.038325   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038338   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038351   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.517814291s)
	I0913 23:28:11.038392   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038411   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038417   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.478018749s)
	I0913 23:28:11.038450   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038462   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038481   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.149383482s)
	I0913 23:28:11.038503   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038527   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.119530303s)
	I0913 23:28:11.038556   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038571   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038518   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038634   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.809394559s)
	W0913 23:28:11.038660   13355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:28:11.038679   13355 retry.go:31] will retry after 183.620302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
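
The failure above is the usual CRD-ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the first apply fails with "no matches for kind", and minikube schedules a retry (the apply --force at 23:28:11.223 further down goes through once the CRDs are registered). A rough sketch of such a retry loop, assuming kubectl is on PATH and a kubeconfig is already set; the manifest list is truncated and the error matching is simplified compared with retry.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            // ... remaining snapshot-controller manifests elided
        }
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        backoff := 200 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("applied on attempt %d\n", attempt)
                return
            }
            // "no matches for kind" means the CRD is not registered yet;
            // anything else is treated as fatal in this sketch.
            if !strings.Contains(string(out), "no matches for kind") {
                panic(err)
            }
            time.Sleep(backoff)
            backoff *= 2
        }
        panic("CRDs never became available")
    }
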
	I0913 23:28:11.038717   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.249875908s)
	I0913 23:28:11.038739   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038748   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038848   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.038862   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.038871   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038865   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.038888   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.038899   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038910   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038878   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039010   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039031   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039036   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039057   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039069   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039122   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039149   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039160   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039167   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039166   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039204   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039214   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039133   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039231   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039239   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039245   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039016   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039310   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039467   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039314   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039385   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039222   13355 addons.go:475] Verifying addon ingress=true in "addons-473197"
	I0913 23:28:11.039415   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039428   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.040400   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.040410   13355 addons.go:475] Verifying addon metrics-server=true in "addons-473197"
	I0913 23:28:11.041432   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.041448   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.041458   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.041473   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.041801   13355 out.go:177] * Verifying ingress addon...
	I0913 23:28:11.042190   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042207   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042216   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.042219   13355 addons.go:475] Verifying addon registry=true in "addons-473197"
	I0913 23:28:11.042423   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042430   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042439   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042443   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042448   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.043752   13355 out.go:177] * Verifying registry addon...
	I0913 23:28:11.043754   13355 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-473197 service yakd-dashboard -n yakd-dashboard
	
	I0913 23:28:11.044788   13355 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 23:28:11.046424   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 23:28:11.081206   13355 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 23:28:11.081236   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.081287   13355 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 23:28:11.081297   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
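
The repeated "waiting for pod ... current state: Pending" lines are kapi.go polling pods by label selector until every match reports the Ready condition. Here is a minimal client-go sketch of that wait, using the registry selector and the 6m0s budget from the log; the kubeconfig path and the 500ms poll interval are assumptions for the sketch.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allReady reports whether every pod in the slice has the Ready condition.
    func allReady(pods []corev1.Pod) bool {
        if len(pods) == 0 {
            return false
        }
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        selector := "kubernetes.io/minikube-addons=registry"
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && allReady(pods.Items) {
                fmt.Println("all pods matching", selector, "are Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pods with label " + selector)
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
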
	I0913 23:28:11.223004   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:11.561874   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.562467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.057966   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.058896   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.469345   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:12.561195   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.600083   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.619662   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.255869457s)
	I0913 23:28:12.619725   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.619738   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.619748   13355 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.309112473s)
	I0913 23:28:12.619902   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396587931s)
	I0913 23:28:12.619956   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.619976   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620101   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620159   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620169   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620183   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.620191   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620194   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620202   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620223   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.620230   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620426   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620437   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620437   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620447   13355 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-473197"
	I0913 23:28:12.620532   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620512   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620564   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.623355   13355 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 23:28:12.623358   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:12.625412   13355 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 23:28:12.626098   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 23:28:12.626980   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:28:12.627005   13355 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 23:28:12.634155   13355 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 23:28:12.634185   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.701404   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:28:12.701431   13355 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 23:28:12.784012   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:12.784039   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 23:28:12.826052   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:13.050608   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.054294   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.131996   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.549130   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.550698   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.654447   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.954168   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.128064006s)
	I0913 23:28:13.954227   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:13.954246   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:13.954502   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:13.954524   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:13.954543   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:13.954551   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:13.954561   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:13.954804   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:13.954864   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:13.954887   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:13.956609   13355 addons.go:475] Verifying addon gcp-auth=true in "addons-473197"
	I0913 23:28:13.958261   13355 out.go:177] * Verifying gcp-auth addon...
	I0913 23:28:13.960562   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 23:28:14.052223   13355 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:28:14.052254   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:14.137186   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.137455   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.211253   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.466086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:14.550740   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.552353   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.633397   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.950640   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:14.966723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:15.066865   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.067365   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.131415   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.466378   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:15.549510   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.552396   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.632635   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.964956   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:16.049836   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.054146   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.131263   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.464327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:16.549627   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.553008   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.632296   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.225129   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:17.225473   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:17.225716   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.226083   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.226210   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.464982   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:17.550258   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.550361   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.630780   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.964491   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:18.049246   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.050330   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.131607   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.464703   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:18.549790   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.550896   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.631297   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.965276   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:19.049836   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.051294   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.131697   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.447973   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:19.464571   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:19.550276   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.551651   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.631103   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.964917   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:20.049683   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.050503   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.130574   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.464865   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:20.550041   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.551487   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.631097   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.969748   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:21.069252   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.069792   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.132416   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.452205   13355 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:21.452227   13355 pod_ready.go:82] duration metric: took 11.011016466s for pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:21.452243   13355 pod_ready.go:39] duration metric: took 13.802114071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:21.452257   13355 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:28:21.452309   13355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:21.464504   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:21.469459   13355 api_server.go:72] duration metric: took 19.043113394s to wait for apiserver process to appear ...
	I0913 23:28:21.469484   13355 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:28:21.469502   13355 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0913 23:28:21.474255   13355 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0913 23:28:21.475191   13355 api_server.go:141] control plane version: v1.31.1
	I0913 23:28:21.475215   13355 api_server.go:131] duration metric: took 5.722944ms to wait for apiserver health ...
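
The healthz check above polls https://192.168.39.50:8443/healthz until it answers 200 with body "ok". A bare-bones version of that probe follows; minikube's real client authenticates with the cluster's client certificates, so the InsecureSkipVerify below is only a shortcut for the sketch.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.50:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        panic("apiserver never became healthy")
    }
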
	I0913 23:28:21.475222   13355 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:28:21.482377   13355 system_pods.go:59] 18 kube-system pods found
	I0913 23:28:21.482406   13355 system_pods.go:61] "coredns-7c65d6cfc9-kx4xn" [f7804727-02ec-474f-b927-f1c4b25ebc89] Running
	I0913 23:28:21.482416   13355 system_pods.go:61] "csi-hostpath-attacher-0" [b0107b78-0c42-480c-8e34-183874425dcd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:21.482422   13355 system_pods.go:61] "csi-hostpath-resizer-0" [4702d211-9a00-4c2c-8be1-9fa3a113583b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:21.482432   13355 system_pods.go:61] "csi-hostpathplugin-b8vk7" [f73ad797-356a-4442-93ce-41561df1c69e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:21.482439   13355 system_pods.go:61] "etcd-addons-473197" [e80abbef-1287-423a-9a02-307822608583] Running
	I0913 23:28:21.482445   13355 system_pods.go:61] "kube-apiserver-addons-473197" [3d5345af-6e8f-473f-a003-2319da2b81c8] Running
	I0913 23:28:21.482450   13355 system_pods.go:61] "kube-controller-manager-addons-473197" [44103129-212d-4d61-9db8-89d56eae1e01] Running
	I0913 23:28:21.482461   13355 system_pods.go:61] "kube-ingress-dns-minikube" [3db76d21-1e5d-4ece-8925-c84d0df606bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 23:28:21.482472   13355 system_pods.go:61] "kube-proxy-vg8p5" [af4c8131-921e-411d-853d-135361aa197b] Running
	I0913 23:28:21.482478   13355 system_pods.go:61] "kube-scheduler-addons-473197" [4e458740-ccbe-4f06-b2f3-f721aa78a0af] Running
	I0913 23:28:21.482484   13355 system_pods.go:61] "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:21.482500   13355 system_pods.go:61] "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
	I0913 23:28:21.482510   13355 system_pods.go:61] "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:21.482517   13355 system_pods.go:61] "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:21.482524   13355 system_pods.go:61] "snapshot-controller-56fcc65765-9lcg8" [ed7715dd-0396-4272-bc7f-531d103d8a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.482532   13355 system_pods.go:61] "snapshot-controller-56fcc65765-f8fq2" [3c9ad9a8-2450-4bf4-a6c6-4e2ca0026232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.482537   13355 system_pods.go:61] "storage-provisioner" [8268a064-fb82-447e-987d-931165d33b2d] Running
	I0913 23:28:21.482547   13355 system_pods.go:61] "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:21.482560   13355 system_pods.go:74] duration metric: took 7.331476ms to wait for pod list to return data ...
	I0913 23:28:21.482573   13355 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:28:21.484999   13355 default_sa.go:45] found service account: "default"
	I0913 23:28:21.485018   13355 default_sa.go:55] duration metric: took 2.439792ms for default service account to be created ...
	I0913 23:28:21.485024   13355 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:28:21.492239   13355 system_pods.go:86] 18 kube-system pods found
	I0913 23:28:21.492270   13355 system_pods.go:89] "coredns-7c65d6cfc9-kx4xn" [f7804727-02ec-474f-b927-f1c4b25ebc89] Running
	I0913 23:28:21.492278   13355 system_pods.go:89] "csi-hostpath-attacher-0" [b0107b78-0c42-480c-8e34-183874425dcd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:21.492304   13355 system_pods.go:89] "csi-hostpath-resizer-0" [4702d211-9a00-4c2c-8be1-9fa3a113583b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:21.492313   13355 system_pods.go:89] "csi-hostpathplugin-b8vk7" [f73ad797-356a-4442-93ce-41561df1c69e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:21.492317   13355 system_pods.go:89] "etcd-addons-473197" [e80abbef-1287-423a-9a02-307822608583] Running
	I0913 23:28:21.492322   13355 system_pods.go:89] "kube-apiserver-addons-473197" [3d5345af-6e8f-473f-a003-2319da2b81c8] Running
	I0913 23:28:21.492326   13355 system_pods.go:89] "kube-controller-manager-addons-473197" [44103129-212d-4d61-9db8-89d56eae1e01] Running
	I0913 23:28:21.492332   13355 system_pods.go:89] "kube-ingress-dns-minikube" [3db76d21-1e5d-4ece-8925-c84d0df606bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 23:28:21.492336   13355 system_pods.go:89] "kube-proxy-vg8p5" [af4c8131-921e-411d-853d-135361aa197b] Running
	I0913 23:28:21.492345   13355 system_pods.go:89] "kube-scheduler-addons-473197" [4e458740-ccbe-4f06-b2f3-f721aa78a0af] Running
	I0913 23:28:21.492354   13355 system_pods.go:89] "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:21.492361   13355 system_pods.go:89] "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
	I0913 23:28:21.492367   13355 system_pods.go:89] "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:21.492375   13355 system_pods.go:89] "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:21.492382   13355 system_pods.go:89] "snapshot-controller-56fcc65765-9lcg8" [ed7715dd-0396-4272-bc7f-531d103d8a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.492387   13355 system_pods.go:89] "snapshot-controller-56fcc65765-f8fq2" [3c9ad9a8-2450-4bf4-a6c6-4e2ca0026232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.492391   13355 system_pods.go:89] "storage-provisioner" [8268a064-fb82-447e-987d-931165d33b2d] Running
	I0913 23:28:21.492399   13355 system_pods.go:89] "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:21.492407   13355 system_pods.go:126] duration metric: took 7.377814ms to wait for k8s-apps to be running ...
	I0913 23:28:21.492417   13355 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:28:21.492462   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:28:21.506589   13355 system_svc.go:56] duration metric: took 14.16145ms WaitForService to wait for kubelet
	I0913 23:28:21.506620   13355 kubeadm.go:582] duration metric: took 19.080279709s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:28:21.506641   13355 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:28:21.509697   13355 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:28:21.509728   13355 node_conditions.go:123] node cpu capacity is 2
	I0913 23:28:21.509740   13355 node_conditions.go:105] duration metric: took 3.093718ms to run NodePressure ...
	I0913 23:28:21.509750   13355 start.go:241] waiting for startup goroutines ...
	I0913 23:28:21.549269   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.549838   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.630759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.964996   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:22.066659   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.066988   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.130457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.464269   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:22.550603   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.551392   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.631480   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.964384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:23.049834   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.050736   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.133507   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.464509   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:23.549382   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.552128   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.631843   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.965613   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:24.049624   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.050338   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.131212   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.464759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:24.549437   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.551097   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.630910   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.964175   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:25.048277   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.050045   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.131365   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.977617   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:25.978628   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.978709   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.979158   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.981429   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:26.049520   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.051681   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.130220   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.464159   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:26.549552   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.551222   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.631176   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.963871   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:27.050910   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.052011   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.132349   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.464810   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:27.549257   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.550786   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.630897   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.964079   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:28.050122   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.050142   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.151036   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.464673   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:28.549691   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.549874   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.630545   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.963838   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:29.049223   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.051589   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.131701   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.464227   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:29.549018   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.552460   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.631494   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.964688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:30.066437   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:30.066971   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.132136   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.464961   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:30.549367   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.550784   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:30.631748   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.964913   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:31.051008   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:31.051249   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.130779   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.464391   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:31.551575   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:31.552105   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.631630   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.965632   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:32.101759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:32.101841   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.131740   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.464572   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:32.549356   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.550906   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:32.633073   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.964216   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:33.048975   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.050916   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:33.131112   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.463822   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:33.549425   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.550516   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:33.630393   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.964336   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:34.048857   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.050443   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:34.151118   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.465096   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:34.549740   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.550620   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:34.631086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.966455   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:35.049659   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:35.050047   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.131495   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.465132   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:35.548766   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.550376   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:35.631577   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.964286   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:36.049062   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.050210   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:36.131543   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.464275   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:36.548452   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.550456   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:36.631360   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.963688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:37.049820   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.050743   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:37.130637   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.464113   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:37.549304   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.550688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:37.631192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.963973   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:38.051608   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.051727   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:38.133034   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.464549   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:38.559078   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:38.559213   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.631291   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.964483   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:39.050741   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:39.051159   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.131060   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.464822   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:39.549844   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:39.550291   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.630944   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.965248   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:40.048824   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.050349   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:40.131327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.464279   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:40.549628   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.550481   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:40.630731   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.964314   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:41.048937   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.050618   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:41.130605   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.464689   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:41.549726   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.550735   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:41.630990   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.964388   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:42.048950   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.050795   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:42.131078   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.464031   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:42.550212   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.551605   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:42.631901   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.965017   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:43.049775   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.050581   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:43.131657   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.464727   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:43.550289   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.550580   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:43.630961   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.965047   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:44.048962   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.050171   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:44.131175   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.463892   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:44.565475   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:44.565612   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.632466   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.964688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:45.049299   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:45.050431   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.134055   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.463841   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:45.550749   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.550792   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:45.631218   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.964803   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.049789   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.050384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:46.131201   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.465262   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.554496   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.555890   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:46.631739   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.963850   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.049818   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:47.051135   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.134195   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.465246   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.549517   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.550721   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:47.633663   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.964089   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.049632   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.050325   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:48.131567   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:48.466199   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.549697   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.550894   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:48.632690   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:48.964192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.049080   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.050467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:49.131986   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:49.464641   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.552164   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.554375   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:49.631764   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:49.965086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.049392   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.050669   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:50.131492   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:50.464328   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.549524   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.550434   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:50.631322   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:50.964441   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.049783   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.055312   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:51.131190   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:51.464922   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.550169   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:51.550221   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.631339   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:51.964457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.049661   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.051864   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:52.132038   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:52.582166   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.583770   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:52.584179   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.630661   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:52.964384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.049046   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.050467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:53.131202   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:53.464541   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.549549   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.551453   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:53.630606   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:53.964993   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.050779   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:54.051367   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.131038   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:54.464444   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.549153   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.551452   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:54.848826   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:54.964836   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.050095   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.050302   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:55.131159   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:55.464360   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.564936   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:55.565447   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.666242   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:55.964847   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.049829   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.051453   13355 kapi.go:107] duration metric: took 45.005028778s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 23:28:56.131651   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:56.464265   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.549020   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.630993   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:56.964711   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.049527   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.132133   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:57.464568   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.550287   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.631088   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:57.965832   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.066601   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:58.131348   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:58.464693   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.551166   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:58.632041   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:58.965180   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.066338   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:59.131515   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:59.463658   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.548973   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:59.630391   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:59.964296   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.049386   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:00.130469   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:00.463737   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.549776   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:00.717623   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:00.964483   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.049274   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:01.131153   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:01.463888   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.549890   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:01.631219   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.255077   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.255610   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:02.255728   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.474419   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.574193   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:02.630689   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.964630   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.049565   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:03.131380   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:03.464744   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.549449   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:03.630833   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:03.965101   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.048562   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:04.131484   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:04.466051   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.568692   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:04.668110   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:04.967488   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.049862   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:05.132252   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:05.464896   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.549994   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:05.630434   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:05.964526   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.065548   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:06.166487   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:06.464128   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.549947   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:06.631713   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:06.963955   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.049715   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:07.130974   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:07.464504   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.550454   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:07.630666   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:07.967197   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.068388   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:08.168815   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:08.464599   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.550992   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:08.630627   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:08.966766   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.053073   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:09.130730   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:09.465025   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.567230   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:09.630516   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:09.965721   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.054440   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:10.130768   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:10.464306   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.548749   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:10.631327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.276930   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:11.277860   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.279328   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.471697   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.582335   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:11.674829   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.965501   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.048830   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:12.130570   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:12.466419   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.553795   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:12.631061   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:12.964723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.051802   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:13.129998   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:13.465020   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.566946   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:13.632019   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:13.969250   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.050082   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:14.130824   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:14.464827   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.565739   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:14.629990   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:14.974680   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.049645   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:15.130802   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:15.464723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.567052   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:15.631421   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:15.964586   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.049406   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:16.130916   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:16.465274   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.548963   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:16.630852   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:16.964129   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.048736   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:17.131304   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:17.465372   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.549339   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:17.631400   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:17.964595   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.048825   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:18.130668   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:18.463994   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.550503   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:18.632529   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:18.978043   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.049954   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:19.131952   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:19.464512   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.551136   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:19.632160   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:19.964960   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.242123   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:20.242829   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:20.465827   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.550268   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:20.633322   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:20.964413   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.049949   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:21.132854   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:21.671555   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:21.673400   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.673957   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:21.963871   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.050196   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:22.130368   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:22.464308   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.549420   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:22.630664   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:22.963895   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.049709   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:23.150900   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:23.464457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.548815   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:23.631125   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:23.976832   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.078240   13355 kapi.go:107] duration metric: took 1m13.033450728s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 23:29:24.131740   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:24.464968   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.118892   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.121603   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.131661   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.464273   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.631894   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.964763   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.130778   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:26.465365   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.630404   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:26.963974   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.131493   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:27.464501   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.632858   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:27.963992   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.132535   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:28.464106   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.633421   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:28.969206   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.132088   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:29.466471   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.631809   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:29.966539   13355 kapi.go:107] duration metric: took 1m16.005977096s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 23:29:29.967938   13355 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-473197 cluster.
	I0913 23:29:29.969110   13355 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 23:29:29.970285   13355 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 23:29:30.131386   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:30.632192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:31.132279   13355 kapi.go:107] duration metric: took 1m18.506177888s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 23:29:31.134114   13355 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0913 23:29:31.135471   13355 addons.go:510] duration metric: took 1m28.709101641s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns default-storageclass inspektor-gadget metrics-server helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0913 23:29:31.135518   13355 start.go:246] waiting for cluster config update ...
	I0913 23:29:31.135543   13355 start.go:255] writing updated cluster config ...
	I0913 23:29:31.135825   13355 ssh_runner.go:195] Run: rm -f paused
	I0913 23:29:31.187868   13355 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:29:31.189865   13355 out.go:177] * Done! kubectl is now configured to use "addons-473197" cluster and "default" namespace by default
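	[Editor's note] The gcp-auth messages above state that credentials are skipped for pods carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest follows; only the label key comes from the log output above, while the pod name, container, image, and the label value "true" are illustrative assumptions, not taken from this report.

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                  # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"      # label key from the gcp-auth message above; value assumed
	    spec:
	      containers:
	      - name: app                         # hypothetical container name
	        image: busybox                    # illustrative image
	        command: ["sleep", "3600"]

	With this label present, the gcp-auth webhook described in the log would be expected to leave the pod without the mounted GCP credentials; pods created without the label would get them by default, as the message above indicates.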
	
	
	==> CRI-O <==
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.768563597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270874768532288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24b1dcd9-0af2-426e-bcae-5822164ca72e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.769070961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=018855d7-f439-4360-af0d-fb0238f7f111 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.769185948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=018855d7-f439-4360-af0d-fb0238f7f111 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.769544793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73
f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a3
6413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,
PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe0
56e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5ab
b05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Met
adata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=018855d7-f439-4360-af0d-fb0238f7f111 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.807067786Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b292567d-bb9f-4d85-ab74-4a4750ab8132 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.807185748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b292567d-bb9f-4d85-ab74-4a4750ab8132 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.808332998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0696b7a5-4257-45ba-8533-95259575f700 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.809531485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270874809503869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0696b7a5-4257-45ba-8533-95259575f700 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.810211294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb71a67e-290a-4d80-b7d2-d689fb0c7cd6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.810286301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb71a67e-290a-4d80-b7d2-d689fb0c7cd6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.810627565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73
f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a3
6413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,
PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe0
56e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5ab
b05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Met
adata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb71a67e-290a-4d80-b7d2-d689fb0c7cd6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.844520147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd1f086b-e0bb-4b41-839e-0283e6f9a20e name=/runtime.v1.RuntimeService/Version
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.844612436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd1f086b-e0bb-4b41-839e-0283e6f9a20e name=/runtime.v1.RuntimeService/Version
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.845782615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17f44a7c-db2f-450e-af7a-9cc26adf637e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.847256169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270874847227067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17f44a7c-db2f-450e-af7a-9cc26adf637e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.847963295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4dc3c51-e9fc-4128-9574-e8d576b24c33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.848032797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4dc3c51-e9fc-4128-9574-e8d576b24c33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.848417443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73
f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a3
6413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,
PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe0
56e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5ab
b05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Met
adata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4dc3c51-e9fc-4128-9574-e8d576b24c33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.887189771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48549aa7-e53b-40d9-abe0-6de35fa7e10a name=/runtime.v1.RuntimeService/Version
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.887291053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48549aa7-e53b-40d9-abe0-6de35fa7e10a name=/runtime.v1.RuntimeService/Version
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.888399177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cb74933-34fa-4f73-909e-32b93bf02475 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.889679037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270874889650438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cb74933-34fa-4f73-909e-32b93bf02475 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.890311395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87d1b2d5-0033-43e1-8ac5-e6d65a7d7bd6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.890382863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87d1b2d5-0033-43e1-8ac5-e6d65a7d7bd6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:41:14 addons-473197 crio[661]: time="2024-09-13 23:41:14.890700324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fedfdc0baddcb6b3d89c131ad0be8013db7a0e7e1d0462eb388559e2de82d6d4,PodSandboxId:52c57a20ae26165a4a28d6fd69c44744f7c20c096bc539fc994934d5cf96c78c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726270146068923852,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bhhr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0961900f-ad3b-4819-9cd0-dd2af3ec16ee,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603c43ae8b4f55bc63896446147084f932829a6e8956dec6e76436b9930b03b5,PodSandboxId:248b6de52c5864dced4f69d016cfb279056d05d0ed101799a7706740abad1d11,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726270145913679207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nw7k5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38ad7da1-a367-4515-a20c-f6a699a7b7b8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73
f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a3
6413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,
PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b8063777371005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe0
56e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5ab
b05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Met
adata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87d1b2d5-0033-43e1-8ac5-e6d65a7d7bd6 name=/runtime.v1.RuntimeService/ListContainers
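The CRI-O debug entries above come from the runtime's journal on the test VM. Assuming the addons-473197 profile is still running, a roughly equivalent excerpt can be pulled back out for inspection (a sketch, not the exact command the harness ran):

    minikube -p addons-473197 ssh -- sudo journalctl -u crio -n 200 --no-pager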
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f66e484f0bcf3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   a2fbf4073cbb5       hello-world-app-55bf9c44b4-jwks5
	97beb09dce981       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   e4292583c4fab       nginx
	038624c91b1cd       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   2a8766ca0210c       headlamp-57fb76fcdb-z5dzh
	5196a5dc9c17b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   d580ec2a88560       gcp-auth-89d5ffd79-74znl
	fedfdc0baddcb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   52c57a20ae261       ingress-nginx-admission-patch-5bhhr
	603c43ae8b4f5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   248b6de52c586       ingress-nginx-admission-create-nw7k5
	04e992df68051       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   dc9bf0b998e05       metrics-server-84c5f94fbc-2rwbq
	bd8804d28cfdd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             12 minutes ago      Running             local-path-provisioner    0                   458dcb49d1f7b       local-path-provisioner-86d989889c-5c8rt
	c9b12f34bf4ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   b5e0a2e4aa643       storage-provisioner
	d89a21338611a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   a8e55428ab347       coredns-7c65d6cfc9-kx4xn
	83331cb3777f3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   f7778cd3a139f       kube-proxy-vg8p5
	04477f2de3ed2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   c0df35fa7a533       etcd-addons-473197
	56e77d112c7cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   6a9771749e8e5       kube-apiserver-addons-473197
	6d8bc098317b8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   555adaf092a3a       kube-scheduler-addons-473197
	5654029eb497f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   16df8b6062c13       kube-controller-manager-addons-473197
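The container status table above reflects the CRI-O runtime's view of all containers on the node. Assuming the cluster is still up, a comparable listing can be reproduced directly against the runtime (a sketch; the harness may gather it differently):

    minikube -p addons-473197 ssh -- sudo crictl ps -a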
	
	
	==> coredns [d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7] <==
	[INFO] 127.0.0.1:45670 - 7126 "HINFO IN 5243104806893607912.7915310536040454133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013008283s
	[INFO] 10.244.0.7:35063 - 39937 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000380883s
	[INFO] 10.244.0.7:35063 - 43782 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014847s
	[INFO] 10.244.0.7:57829 - 35566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163865s
	[INFO] 10.244.0.7:57829 - 30448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000958s
	[INFO] 10.244.0.7:39015 - 39866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132201s
	[INFO] 10.244.0.7:39015 - 60863 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107562s
	[INFO] 10.244.0.7:58981 - 30723 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000162373s
	[INFO] 10.244.0.7:58981 - 46338 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00022074s
	[INFO] 10.244.0.7:42427 - 30557 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119811s
	[INFO] 10.244.0.7:42427 - 64858 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194198s
	[INFO] 10.244.0.7:47702 - 27656 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006553s
	[INFO] 10.244.0.7:47702 - 4878 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042687s
	[INFO] 10.244.0.7:44162 - 12670 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051358s
	[INFO] 10.244.0.7:44162 - 55416 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106292s
	[INFO] 10.244.0.7:42573 - 35758 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040379s
	[INFO] 10.244.0.7:42573 - 45232 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000289788s
	[INFO] 10.244.0.22:35446 - 19101 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000568711s
	[INFO] 10.244.0.22:46347 - 39209 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000700369s
	[INFO] 10.244.0.22:55127 - 33729 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167148s
	[INFO] 10.244.0.22:59606 - 29197 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000295747s
	[INFO] 10.244.0.22:59298 - 45525 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000336329s
	[INFO] 10.244.0.22:46438 - 8493 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150611s
	[INFO] 10.244.0.22:45134 - 55606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000995828s
	[INFO] 10.244.0.22:56372 - 20336 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001287124s
	
	
	==> describe nodes <==
	Name:               addons-473197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-473197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-473197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_27_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-473197
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-473197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:41:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:38:59 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:38:59 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:38:59 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:38:59 +0000   Fri, 13 Sep 2024 23:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-473197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a5e8d89e8ad43a6a8c642064226a573
	  System UUID:                2a5e8d89-e8ad-43a6-a8c6-42064226a573
	  Boot ID:                    f73ad719-e78b-4b75-b596-4b22311bf8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-jwks5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-89d5ffd79-74znl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  headlamp                    headlamp-57fb76fcdb-z5dzh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 coredns-7c65d6cfc9-kx4xn                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-473197                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-473197               250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-473197      200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vg8p5                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-473197               100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-2rwbq            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-5c8rt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-473197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-473197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-473197 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-473197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-473197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-473197 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-473197 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-473197 event: Registered Node addons-473197 in Controller
	  Normal  CIDRAssignmentFailed     13m                cidrAllocator    Node addons-473197 status is now: CIDRAssignmentFailed
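The node summary above is the standard kubectl description of the single control-plane node. Assuming the profile's kubeconfig context is still present, an equivalent view can be obtained with (a sketch using the context naming seen elsewhere in this report):

    kubectl --context addons-473197 describe node addons-473197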
	
	
	==> dmesg <==
	[ +11.767727] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.860640] kauditd_printk_skb: 4 callbacks suppressed
	[Sep13 23:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.350538] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.114970] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.355485] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.753980] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.472896] kauditd_printk_skb: 14 callbacks suppressed
	[ +24.455652] kauditd_printk_skb: 32 callbacks suppressed
	[Sep13 23:30] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:37] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.847903] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.069379] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.043967] kauditd_printk_skb: 10 callbacks suppressed
	[Sep13 23:38] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.853283] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.843077] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.344633] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.164878] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.255016] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.389323] kauditd_printk_skb: 19 callbacks suppressed
	[Sep13 23:41] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.399611] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4] <==
	{"level":"info","ts":"2024-09-13T23:37:51.889897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1557}
	{"level":"info","ts":"2024-09-13T23:37:51.938422Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1557,"took":"47.925054ms","hash":4240063649,"current-db-size-bytes":6725632,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3678208,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-09-13T23:37:51.938492Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4240063649,"revision":1557,"compact-revision":-1}
	{"level":"info","ts":"2024-09-13T23:37:56.077320Z","caller":"traceutil/trace.go:171","msg":"trace[2140372478] linearizableReadLoop","detail":"{readStateIndex:2209; appliedIndex:2208; }","duration":"247.359784ms","start":"2024-09-13T23:37:55.829919Z","end":"2024-09-13T23:37:56.077279Z","steps":["trace[2140372478] 'read index received'  (duration: 247.248443ms)","trace[2140372478] 'applied index is now lower than readState.Index'  (duration: 110.59µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:37:56.077451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.493847ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:37:56.077484Z","caller":"traceutil/trace.go:171","msg":"trace[388607265] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2064; }","duration":"247.558448ms","start":"2024-09-13T23:37:55.829913Z","end":"2024-09-13T23:37:56.077472Z","steps":["trace[388607265] 'agreement among raft nodes before linearized reading'  (duration: 247.477707ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:37:56.077636Z","caller":"traceutil/trace.go:171","msg":"trace[772616711] transaction","detail":"{read_only:false; response_revision:2064; number_of_response:1; }","duration":"342.562437ms","start":"2024-09-13T23:37:55.735053Z","end":"2024-09-13T23:37:56.077616Z","steps":["trace[772616711] 'process raft request'  (duration: 342.117628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:37:56.077806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:37:55.735020Z","time spent":"342.655019ms","remote":"127.0.0.1:53072","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-13T23:38:26.271493Z","caller":"traceutil/trace.go:171","msg":"trace[2131628066] linearizableReadLoop","detail":"{readStateIndex:2494; appliedIndex:2493; }","duration":"108.567306ms","start":"2024-09-13T23:38:26.162913Z","end":"2024-09-13T23:38:26.271481Z","steps":["trace[2131628066] 'read index received'  (duration: 108.433015ms)","trace[2131628066] 'applied index is now lower than readState.Index'  (duration: 133.742µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:38:26.271587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.679598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:26.271607Z","caller":"traceutil/trace.go:171","msg":"trace[907710598] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-attacher; range_end:; response_count:0; response_revision:2337; }","duration":"108.715806ms","start":"2024-09-13T23:38:26.162886Z","end":"2024-09-13T23:38:26.271602Z","steps":["trace[907710598] 'agreement among raft nodes before linearized reading'  (duration: 108.663744ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:38:26.271800Z","caller":"traceutil/trace.go:171","msg":"trace[1022302903] transaction","detail":"{read_only:false; response_revision:2337; number_of_response:1; }","duration":"163.9076ms","start":"2024-09-13T23:38:26.107885Z","end":"2024-09-13T23:38:26.271793Z","steps":["trace[1022302903] 'process raft request'  (duration: 163.492838ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:38:33.012093Z","caller":"traceutil/trace.go:171","msg":"trace[1999613905] linearizableReadLoop","detail":"{readStateIndex:2536; appliedIndex:2535; }","duration":"332.084954ms","start":"2024-09-13T23:38:32.679984Z","end":"2024-09-13T23:38:33.012069Z","steps":["trace[1999613905] 'read index received'  (duration: 331.823648ms)","trace[1999613905] 'applied index is now lower than readState.Index'  (duration: 260.868µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T23:38:33.012294Z","caller":"traceutil/trace.go:171","msg":"trace[1261167858] transaction","detail":"{read_only:false; response_revision:2376; number_of_response:1; }","duration":"410.968582ms","start":"2024-09-13T23:38:32.601315Z","end":"2024-09-13T23:38:33.012284Z","steps":["trace[1261167858] 'process raft request'  (duration: 410.572548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.21368ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:33.012477Z","caller":"traceutil/trace.go:171","msg":"trace[420653707] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2376; }","duration":"182.278804ms","start":"2024-09-13T23:38:32.830178Z","end":"2024-09-13T23:38:33.012457Z","steps":["trace[420653707] 'agreement among raft nodes before linearized reading'  (duration: 182.193813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:38:32.601291Z","time spent":"411.032114ms","remote":"127.0.0.1:52964","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2374 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-13T23:38:33.012625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.637201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:33.012648Z","caller":"traceutil/trace.go:171","msg":"trace[1394512193] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2376; }","duration":"332.657728ms","start":"2024-09-13T23:38:32.679980Z","end":"2024-09-13T23:38:33.012638Z","steps":["trace[1394512193] 'agreement among raft nodes before linearized reading'  (duration: 332.619348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:38:32.679948Z","time spent":"332.7162ms","remote":"127.0.0.1:52786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-13T23:39:03.932214Z","caller":"traceutil/trace.go:171","msg":"trace[187426515] linearizableReadLoop","detail":"{readStateIndex:2670; appliedIndex:2669; }","duration":"102.345322ms","start":"2024-09-13T23:39:03.829836Z","end":"2024-09-13T23:39:03.932181Z","steps":["trace[187426515] 'read index received'  (duration: 102.066171ms)","trace[187426515] 'applied index is now lower than readState.Index'  (duration: 278.498µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:39:03.932347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.485385ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:39:03.932376Z","caller":"traceutil/trace.go:171","msg":"trace[1119183525] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2503; }","duration":"102.540076ms","start":"2024-09-13T23:39:03.829827Z","end":"2024-09-13T23:39:03.932367Z","steps":["trace[1119183525] 'agreement among raft nodes before linearized reading'  (duration: 102.470259ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:39:03.932531Z","caller":"traceutil/trace.go:171","msg":"trace[1350711930] transaction","detail":"{read_only:false; response_revision:2503; number_of_response:1; }","duration":"117.262386ms","start":"2024-09-13T23:39:03.815262Z","end":"2024-09-13T23:39:03.932524Z","steps":["trace[1350711930] 'process raft request'  (duration: 116.68468ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:39:19.058092Z","caller":"traceutil/trace.go:171","msg":"trace[1959501530] transaction","detail":"{read_only:false; response_revision:2517; number_of_response:1; }","duration":"107.407476ms","start":"2024-09-13T23:39:18.950665Z","end":"2024-09-13T23:39:19.058072Z","steps":["trace[1959501530] 'process raft request'  (duration: 107.288226ms)"],"step_count":1}
	
	
	==> gcp-auth [5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299] <==
	2024/09/13 23:29:31 Ready to write response ...
	2024/09/13 23:37:39 Ready to marshal response ...
	2024/09/13 23:37:39 Ready to write response ...
	2024/09/13 23:37:45 Ready to marshal response ...
	2024/09/13 23:37:45 Ready to write response ...
	2024/09/13 23:37:46 Ready to marshal response ...
	2024/09/13 23:37:46 Ready to write response ...
	2024/09/13 23:37:46 Ready to marshal response ...
	2024/09/13 23:37:46 Ready to write response ...
	2024/09/13 23:37:47 Ready to marshal response ...
	2024/09/13 23:37:47 Ready to write response ...
	2024/09/13 23:38:00 Ready to marshal response ...
	2024/09/13 23:38:00 Ready to write response ...
	2024/09/13 23:38:11 Ready to marshal response ...
	2024/09/13 23:38:11 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:40 Ready to marshal response ...
	2024/09/13 23:38:40 Ready to write response ...
	2024/09/13 23:41:04 Ready to marshal response ...
	2024/09/13 23:41:04 Ready to write response ...
	
	
	==> kernel <==
	 23:41:15 up 13 min,  0 users,  load average: 0.15, 0.41, 0.48
	Linux addons-473197 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a] <==
	 > logger="UnhandledError"
	E0913 23:29:58.924676       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.102.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.102.69:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.102.69:443: connect: connection refused" logger="UnhandledError"
	E0913 23:29:58.955657       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0913 23:29:58.960975       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0913 23:38:03.472937       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0913 23:38:24.878230       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.178.54"}
	I0913 23:38:28.146928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.146968       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.188882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.188920       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.207934       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.207989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.290379       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.290409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.311424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.311452       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 23:38:29.290607       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0913 23:38:29.311717       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 23:38:29.343244       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0913 23:38:35.108228       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 23:38:36.237049       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 23:38:40.565186       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 23:38:40.744026       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.251.250"}
	I0913 23:41:04.489776       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.154.138"}
	E0913 23:41:06.929046       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd] <==
	E0913 23:39:44.201095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:00.229510       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:00.229647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:01.250518       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:01.250675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:02.990928       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:02.991046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:18.786024       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:18.786142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:33.044354       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:33.044461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:41.926453       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:41.926660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:56.540603       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:56.540665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:41:03.306814       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:41:03.306933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:41:04.349283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="80.404792ms"
	I0913 23:41:04.367487       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.048805ms"
	I0913 23:41:04.367756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="83.924µs"
	I0913 23:41:06.821547       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0913 23:41:06.832235       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0913 23:41:06.833258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.351µs"
	I0913 23:41:08.222682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.757218ms"
	I0913 23:41:08.222774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.507µs"
	
	
	==> kube-proxy [83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:28:04.380224       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:28:04.489950       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.50"]
	E0913 23:28:04.490030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:28:04.594464       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:28:04.594495       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:28:04.594519       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:28:04.603873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:28:04.604221       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:28:04.604252       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:28:04.605991       1 config.go:199] "Starting service config controller"
	I0913 23:28:04.606001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:28:04.606031       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:28:04.606036       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:28:04.618190       1 config.go:328] "Starting node config controller"
	I0913 23:28:04.618220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:28:04.706337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:28:04.706402       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:28:04.718993       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f] <==
	W0913 23:27:54.609234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 23:27:54.609344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.615180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 23:27:54.615314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.634487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:54.634695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.650017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 23:27:54.650225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.663547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 23:27:54.663702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.739538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 23:27:54.739633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.802428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 23:27:54.802534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.802606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 23:27:54.802645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.915039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 23:27:54.915259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.056348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.056469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.122788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.122892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.209039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.209209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 23:27:57.297586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:41:04 addons-473197 kubelet[1197]: I0913 23:41:04.477460    1197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1adcfbe2-6ad2-4779-8429-55e9b080fc4c-gcp-creds\") pod \"hello-world-app-55bf9c44b4-jwks5\" (UID: \"1adcfbe2-6ad2-4779-8429-55e9b080fc4c\") " pod="default/hello-world-app-55bf9c44b4-jwks5"
	Sep 13 23:41:04 addons-473197 kubelet[1197]: E0913 23:41:04.969967    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b7a4adaf-7929-4bb9-9ec5-b24ee1a8c88a"
	Sep 13 23:41:05 addons-473197 kubelet[1197]: I0913 23:41:05.586488    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh4fx\" (UniqueName: \"kubernetes.io/projected/3db76d21-1e5d-4ece-8925-c84d0df606bf-kube-api-access-dh4fx\") pod \"3db76d21-1e5d-4ece-8925-c84d0df606bf\" (UID: \"3db76d21-1e5d-4ece-8925-c84d0df606bf\") "
	Sep 13 23:41:05 addons-473197 kubelet[1197]: I0913 23:41:05.592240    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db76d21-1e5d-4ece-8925-c84d0df606bf-kube-api-access-dh4fx" (OuterVolumeSpecName: "kube-api-access-dh4fx") pod "3db76d21-1e5d-4ece-8925-c84d0df606bf" (UID: "3db76d21-1e5d-4ece-8925-c84d0df606bf"). InnerVolumeSpecName "kube-api-access-dh4fx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:41:05 addons-473197 kubelet[1197]: I0913 23:41:05.687085    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dh4fx\" (UniqueName: \"kubernetes.io/projected/3db76d21-1e5d-4ece-8925-c84d0df606bf-kube-api-access-dh4fx\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:41:06 addons-473197 kubelet[1197]: I0913 23:41:06.183097    1197 scope.go:117] "RemoveContainer" containerID="636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b"
	Sep 13 23:41:06 addons-473197 kubelet[1197]: I0913 23:41:06.206920    1197 scope.go:117] "RemoveContainer" containerID="636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b"
	Sep 13 23:41:06 addons-473197 kubelet[1197]: E0913 23:41:06.207458    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b\": container with ID starting with 636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b not found: ID does not exist" containerID="636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b"
	Sep 13 23:41:06 addons-473197 kubelet[1197]: I0913 23:41:06.207503    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b"} err="failed to get container status \"636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b\": rpc error: code = NotFound desc = could not find container \"636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b\": container with ID starting with 636eac7cca31bef8fbd0f0479e6f7c3eb1128bf01eba49dd7af98dabb05d217b not found: ID does not exist"
	Sep 13 23:41:06 addons-473197 kubelet[1197]: I0913 23:41:06.970331    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0961900f-ad3b-4819-9cd0-dd2af3ec16ee" path="/var/lib/kubelet/pods/0961900f-ad3b-4819-9cd0-dd2af3ec16ee/volumes"
	Sep 13 23:41:06 addons-473197 kubelet[1197]: I0913 23:41:06.970807    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ad7da1-a367-4515-a20c-f6a699a7b7b8" path="/var/lib/kubelet/pods/38ad7da1-a367-4515-a20c-f6a699a7b7b8/volumes"
	Sep 13 23:41:06 addons-473197 kubelet[1197]: I0913 23:41:06.971278    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db76d21-1e5d-4ece-8925-c84d0df606bf" path="/var/lib/kubelet/pods/3db76d21-1e5d-4ece-8925-c84d0df606bf/volumes"
	Sep 13 23:41:07 addons-473197 kubelet[1197]: E0913 23:41:07.596022    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270867595542411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:41:07 addons-473197 kubelet[1197]: E0913 23:41:07.596054    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270867595542411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.122774    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f9176c8-fc0c-4357-b26c-f7d80c3527af-webhook-cert\") pod \"6f9176c8-fc0c-4357-b26c-f7d80c3527af\" (UID: \"6f9176c8-fc0c-4357-b26c-f7d80c3527af\") "
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.122845    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wltnz\" (UniqueName: \"kubernetes.io/projected/6f9176c8-fc0c-4357-b26c-f7d80c3527af-kube-api-access-wltnz\") pod \"6f9176c8-fc0c-4357-b26c-f7d80c3527af\" (UID: \"6f9176c8-fc0c-4357-b26c-f7d80c3527af\") "
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.125414    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f9176c8-fc0c-4357-b26c-f7d80c3527af-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6f9176c8-fc0c-4357-b26c-f7d80c3527af" (UID: "6f9176c8-fc0c-4357-b26c-f7d80c3527af"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.126223    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9176c8-fc0c-4357-b26c-f7d80c3527af-kube-api-access-wltnz" (OuterVolumeSpecName: "kube-api-access-wltnz") pod "6f9176c8-fc0c-4357-b26c-f7d80c3527af" (UID: "6f9176c8-fc0c-4357-b26c-f7d80c3527af"). InnerVolumeSpecName "kube-api-access-wltnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.213361    1197 scope.go:117] "RemoveContainer" containerID="5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e"
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.224380    1197 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f9176c8-fc0c-4357-b26c-f7d80c3527af-webhook-cert\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.224423    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wltnz\" (UniqueName: \"kubernetes.io/projected/6f9176c8-fc0c-4357-b26c-f7d80c3527af-kube-api-access-wltnz\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.244312    1197 scope.go:117] "RemoveContainer" containerID="5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e"
	Sep 13 23:41:10 addons-473197 kubelet[1197]: E0913 23:41:10.245255    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e\": container with ID starting with 5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e not found: ID does not exist" containerID="5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e"
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.245330    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e"} err="failed to get container status \"5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e\": rpc error: code = NotFound desc = could not find container \"5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e\": container with ID starting with 5393c81e3d84ac3bf4fe099094eb818c4e173989007e13ac142d2c46769ed82e not found: ID does not exist"
	Sep 13 23:41:10 addons-473197 kubelet[1197]: I0913 23:41:10.969587    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9176c8-fc0c-4357-b26c-f7d80c3527af" path="/var/lib/kubelet/pods/6f9176c8-fc0c-4357-b26c-f7d80c3527af/volumes"
	
	
	==> storage-provisioner [c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d] <==
	I0913 23:28:10.804057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:28:11.078500       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:28:11.078567       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:28:11.120016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:28:11.124355       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c048df4-0a4e-4b96-9f0e-8fcf6762cf64", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8 became leader
	I0913 23:28:11.124757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8!
	I0913 23:28:11.226238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-473197 -n addons-473197
helpers_test.go:261: (dbg) Run:  kubectl --context addons-473197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-473197 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-473197 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-473197/192.168.39.50
	Start Time:       Fri, 13 Sep 2024 23:29:31 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nj4pg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nj4pg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-473197
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m59s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    103s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.78s)
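The events above point at two separate problems: the busybox pod never pulls gcr.io/k8s-minikube/busybox:1.28.4-glibc (the registry rejects the auth token), and the in-VM curl probe against the nginx ingress (see the ssh entry in the Audit table below, which has no end time) never completed. A minimal, hypothetical manual re-check, assuming the addons-473197 profile from this log is still running; these commands are not part of the recorded test run:

	kubectl --context addons-473197 describe pod busybox            # confirm the ImagePullBackOff / auth failure reason
	kubectl --context addons-473197 get svc nginx                   # Service was assigned 10.108.251.250 earlier in the apiserver log
	kubectl --context addons-473197 get ingress                     # the Ingress object the curl probe targets
	out/minikube-linux-amd64 -p addons-473197 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'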

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (329.03s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.391101ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005053846s
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (73.656848ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 9m37.240911366s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (80.170713ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 9m40.604193612s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (67.560961ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 9m46.657736747s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (96.413288ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 9m52.111880973s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (79.658703ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 10m4.406045971s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (66.426592ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 10m13.800314454s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (74.589685ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 10m30.324539083s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (64.050813ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 10m58.903925244s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (64.93511ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 12m11.321930815s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (61.086302ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 13m38.23809825s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-473197 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-473197 top pods -n kube-system: exit status 1 (63.532177ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kx4xn, age: 14m58.37989839s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
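Every kubectl top attempt above fails with "Metrics not available", which usually indicates the metrics.k8s.io APIService never became Available (consistent with the kube-apiserver's earlier connection-refused errors against 10.107.102.69:443). A short, hypothetical diagnostic sequence, assuming the addons-473197 context is still reachable; not part of the recorded run:

	kubectl --context addons-473197 get apiservice v1beta1.metrics.k8s.io        # expect AVAILABLE=True when healthy
	kubectl --context addons-473197 -n kube-system logs deploy/metrics-server    # look for scrape/TLS errors
	kubectl --context addons-473197 top nodes                                    # sanity check once the APIService is Available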
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-473197 -n addons-473197
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 logs -n 25: (1.358503004s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-551384                                                                     | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-763760                                                                     | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-510431 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | binary-mirror-510431                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40845                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-510431                                                                     | binary-mirror-510431 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-473197 --wait=true                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:37 UTC | 13 Sep 24 23:37 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-473197 ssh cat                                                                       | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | /opt/local-path-provisioner/pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | -p addons-473197                                                                            |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | -p addons-473197                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | addons-473197                                                                               |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-473197 ip                                                                            | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC | 13 Sep 24 23:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-473197 ssh curl -s                                                                   | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-473197 ip                                                                            | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:41 UTC | 13 Sep 24 23:41 UTC |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:41 UTC | 13 Sep 24 23:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473197 addons disable                                                                | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:41 UTC | 13 Sep 24 23:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-473197 addons                                                                        | addons-473197        | jenkins | v1.34.0 | 13 Sep 24 23:43 UTC | 13 Sep 24 23:43 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:27:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:27:19.727478   13355 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:27:19.727577   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:19.727584   13355 out.go:358] Setting ErrFile to fd 2...
	I0913 23:27:19.727589   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:19.727825   13355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:27:19.728488   13355 out.go:352] Setting JSON to false
	I0913 23:27:19.729317   13355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":586,"bootTime":1726269454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:27:19.729406   13355 start.go:139] virtualization: kvm guest
	I0913 23:27:19.731822   13355 out.go:177] * [addons-473197] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:27:19.733210   13355 notify.go:220] Checking for updates...
	I0913 23:27:19.733237   13355 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:27:19.734712   13355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:27:19.735976   13355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:27:19.737182   13355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:19.738438   13355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:27:19.739925   13355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:27:19.741131   13355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:27:19.775615   13355 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 23:27:19.777213   13355 start.go:297] selected driver: kvm2
	I0913 23:27:19.777235   13355 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:27:19.777247   13355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:27:19.777996   13355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:19.778088   13355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:27:19.793811   13355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:27:19.793861   13355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:27:19.794087   13355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:27:19.794117   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:19.794161   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:19.794171   13355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:27:19.794217   13355 start.go:340] cluster config:
	{Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:19.794313   13355 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:19.796337   13355 out.go:177] * Starting "addons-473197" primary control-plane node in "addons-473197" cluster
	I0913 23:27:19.797380   13355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:19.797422   13355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:27:19.797444   13355 cache.go:56] Caching tarball of preloaded images
	I0913 23:27:19.797531   13355 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:27:19.797549   13355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:27:19.797846   13355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json ...
	I0913 23:27:19.797865   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json: {Name:mkc3a28348c95a05c47c4230656de6866b98328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:19.798004   13355 start.go:360] acquireMachinesLock for addons-473197: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:27:19.798046   13355 start.go:364] duration metric: took 28.71µs to acquireMachinesLock for "addons-473197"
	I0913 23:27:19.798062   13355 start.go:93] Provisioning new machine with config: &{Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:27:19.798113   13355 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 23:27:19.799714   13355 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 23:27:19.799890   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:27:19.799928   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:27:19.814905   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0913 23:27:19.815364   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:27:19.815966   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:27:19.815989   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:27:19.816395   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:27:19.816630   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:19.816779   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:19.816997   13355 start.go:159] libmachine.API.Create for "addons-473197" (driver="kvm2")
	I0913 23:27:19.817032   13355 client.go:168] LocalClient.Create starting
	I0913 23:27:19.817080   13355 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:27:19.909228   13355 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:27:19.970689   13355 main.go:141] libmachine: Running pre-create checks...
	I0913 23:27:19.970714   13355 main.go:141] libmachine: (addons-473197) Calling .PreCreateCheck
	I0913 23:27:19.971194   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:19.971662   13355 main.go:141] libmachine: Creating machine...
	I0913 23:27:19.971677   13355 main.go:141] libmachine: (addons-473197) Calling .Create
	I0913 23:27:19.971844   13355 main.go:141] libmachine: (addons-473197) Creating KVM machine...
	I0913 23:27:19.973234   13355 main.go:141] libmachine: (addons-473197) DBG | found existing default KVM network
	I0913 23:27:19.974016   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:19.973849   13377 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0913 23:27:19.974095   13355 main.go:141] libmachine: (addons-473197) DBG | created network xml: 
	I0913 23:27:19.974122   13355 main.go:141] libmachine: (addons-473197) DBG | <network>
	I0913 23:27:19.974136   13355 main.go:141] libmachine: (addons-473197) DBG |   <name>mk-addons-473197</name>
	I0913 23:27:19.974149   13355 main.go:141] libmachine: (addons-473197) DBG |   <dns enable='no'/>
	I0913 23:27:19.974157   13355 main.go:141] libmachine: (addons-473197) DBG |   
	I0913 23:27:19.974171   13355 main.go:141] libmachine: (addons-473197) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 23:27:19.974179   13355 main.go:141] libmachine: (addons-473197) DBG |     <dhcp>
	I0913 23:27:19.974184   13355 main.go:141] libmachine: (addons-473197) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 23:27:19.974189   13355 main.go:141] libmachine: (addons-473197) DBG |     </dhcp>
	I0913 23:27:19.974194   13355 main.go:141] libmachine: (addons-473197) DBG |   </ip>
	I0913 23:27:19.974216   13355 main.go:141] libmachine: (addons-473197) DBG |   
	I0913 23:27:19.974226   13355 main.go:141] libmachine: (addons-473197) DBG | </network>
	I0913 23:27:19.974233   13355 main.go:141] libmachine: (addons-473197) DBG | 
	I0913 23:27:19.980176   13355 main.go:141] libmachine: (addons-473197) DBG | trying to create private KVM network mk-addons-473197 192.168.39.0/24...
	I0913 23:27:20.045910   13355 main.go:141] libmachine: (addons-473197) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 ...
	I0913 23:27:20.045940   13355 main.go:141] libmachine: (addons-473197) DBG | private KVM network mk-addons-473197 192.168.39.0/24 created
	I0913 23:27:20.045954   13355 main.go:141] libmachine: (addons-473197) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:27:20.046047   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.045834   13377 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:20.046087   13355 main.go:141] libmachine: (addons-473197) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:27:20.298677   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.298568   13377 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa...
	I0913 23:27:20.458808   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.458662   13377 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/addons-473197.rawdisk...
	I0913 23:27:20.458837   13355 main.go:141] libmachine: (addons-473197) DBG | Writing magic tar header
	I0913 23:27:20.458849   13355 main.go:141] libmachine: (addons-473197) DBG | Writing SSH key tar header
	I0913 23:27:20.458859   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:20.458774   13377 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 ...
	I0913 23:27:20.458873   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197
	I0913 23:27:20.458907   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197 (perms=drwx------)
	I0913 23:27:20.458937   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:27:20.458947   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:27:20.458964   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:20.458975   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:27:20.458985   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:27:20.459015   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:27:20.459028   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:27:20.459044   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:27:20.459058   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:27:20.459067   13355 main.go:141] libmachine: (addons-473197) DBG | Checking permissions on dir: /home
	I0913 23:27:20.459081   13355 main.go:141] libmachine: (addons-473197) DBG | Skipping /home - not owner
	I0913 23:27:20.459096   13355 main.go:141] libmachine: (addons-473197) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:27:20.459111   13355 main.go:141] libmachine: (addons-473197) Creating domain...
	I0913 23:27:20.459993   13355 main.go:141] libmachine: (addons-473197) define libvirt domain using xml: 
	I0913 23:27:20.460017   13355 main.go:141] libmachine: (addons-473197) <domain type='kvm'>
	I0913 23:27:20.460026   13355 main.go:141] libmachine: (addons-473197)   <name>addons-473197</name>
	I0913 23:27:20.460037   13355 main.go:141] libmachine: (addons-473197)   <memory unit='MiB'>4000</memory>
	I0913 23:27:20.460042   13355 main.go:141] libmachine: (addons-473197)   <vcpu>2</vcpu>
	I0913 23:27:20.460054   13355 main.go:141] libmachine: (addons-473197)   <features>
	I0913 23:27:20.460079   13355 main.go:141] libmachine: (addons-473197)     <acpi/>
	I0913 23:27:20.460098   13355 main.go:141] libmachine: (addons-473197)     <apic/>
	I0913 23:27:20.460109   13355 main.go:141] libmachine: (addons-473197)     <pae/>
	I0913 23:27:20.460119   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460142   13355 main.go:141] libmachine: (addons-473197)   </features>
	I0913 23:27:20.460165   13355 main.go:141] libmachine: (addons-473197)   <cpu mode='host-passthrough'>
	I0913 23:27:20.460178   13355 main.go:141] libmachine: (addons-473197)   
	I0913 23:27:20.460200   13355 main.go:141] libmachine: (addons-473197)   </cpu>
	I0913 23:27:20.460208   13355 main.go:141] libmachine: (addons-473197)   <os>
	I0913 23:27:20.460213   13355 main.go:141] libmachine: (addons-473197)     <type>hvm</type>
	I0913 23:27:20.460220   13355 main.go:141] libmachine: (addons-473197)     <boot dev='cdrom'/>
	I0913 23:27:20.460226   13355 main.go:141] libmachine: (addons-473197)     <boot dev='hd'/>
	I0913 23:27:20.460238   13355 main.go:141] libmachine: (addons-473197)     <bootmenu enable='no'/>
	I0913 23:27:20.460250   13355 main.go:141] libmachine: (addons-473197)   </os>
	I0913 23:27:20.460265   13355 main.go:141] libmachine: (addons-473197)   <devices>
	I0913 23:27:20.460282   13355 main.go:141] libmachine: (addons-473197)     <disk type='file' device='cdrom'>
	I0913 23:27:20.460301   13355 main.go:141] libmachine: (addons-473197)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/boot2docker.iso'/>
	I0913 23:27:20.460326   13355 main.go:141] libmachine: (addons-473197)       <target dev='hdc' bus='scsi'/>
	I0913 23:27:20.460339   13355 main.go:141] libmachine: (addons-473197)       <readonly/>
	I0913 23:27:20.460345   13355 main.go:141] libmachine: (addons-473197)     </disk>
	I0913 23:27:20.460351   13355 main.go:141] libmachine: (addons-473197)     <disk type='file' device='disk'>
	I0913 23:27:20.460361   13355 main.go:141] libmachine: (addons-473197)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:27:20.460368   13355 main.go:141] libmachine: (addons-473197)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/addons-473197.rawdisk'/>
	I0913 23:27:20.460375   13355 main.go:141] libmachine: (addons-473197)       <target dev='hda' bus='virtio'/>
	I0913 23:27:20.460379   13355 main.go:141] libmachine: (addons-473197)     </disk>
	I0913 23:27:20.460385   13355 main.go:141] libmachine: (addons-473197)     <interface type='network'>
	I0913 23:27:20.460390   13355 main.go:141] libmachine: (addons-473197)       <source network='mk-addons-473197'/>
	I0913 23:27:20.460397   13355 main.go:141] libmachine: (addons-473197)       <model type='virtio'/>
	I0913 23:27:20.460401   13355 main.go:141] libmachine: (addons-473197)     </interface>
	I0913 23:27:20.460408   13355 main.go:141] libmachine: (addons-473197)     <interface type='network'>
	I0913 23:27:20.460413   13355 main.go:141] libmachine: (addons-473197)       <source network='default'/>
	I0913 23:27:20.460419   13355 main.go:141] libmachine: (addons-473197)       <model type='virtio'/>
	I0913 23:27:20.460424   13355 main.go:141] libmachine: (addons-473197)     </interface>
	I0913 23:27:20.460430   13355 main.go:141] libmachine: (addons-473197)     <serial type='pty'>
	I0913 23:27:20.460446   13355 main.go:141] libmachine: (addons-473197)       <target port='0'/>
	I0913 23:27:20.460463   13355 main.go:141] libmachine: (addons-473197)     </serial>
	I0913 23:27:20.460475   13355 main.go:141] libmachine: (addons-473197)     <console type='pty'>
	I0913 23:27:20.460492   13355 main.go:141] libmachine: (addons-473197)       <target type='serial' port='0'/>
	I0913 23:27:20.460504   13355 main.go:141] libmachine: (addons-473197)     </console>
	I0913 23:27:20.460514   13355 main.go:141] libmachine: (addons-473197)     <rng model='virtio'>
	I0913 23:27:20.460527   13355 main.go:141] libmachine: (addons-473197)       <backend model='random'>/dev/random</backend>
	I0913 23:27:20.460540   13355 main.go:141] libmachine: (addons-473197)     </rng>
	I0913 23:27:20.460548   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460554   13355 main.go:141] libmachine: (addons-473197)     
	I0913 23:27:20.460564   13355 main.go:141] libmachine: (addons-473197)   </devices>
	I0913 23:27:20.460574   13355 main.go:141] libmachine: (addons-473197) </domain>
	I0913 23:27:20.460592   13355 main.go:141] libmachine: (addons-473197) 
	I0913 23:27:20.466244   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:75:c0:ca in network default
	I0913 23:27:20.467639   13355 main.go:141] libmachine: (addons-473197) Ensuring networks are active...
	I0913 23:27:20.467669   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:20.468356   13355 main.go:141] libmachine: (addons-473197) Ensuring network default is active
	I0913 23:27:20.468605   13355 main.go:141] libmachine: (addons-473197) Ensuring network mk-addons-473197 is active
	I0913 23:27:20.469014   13355 main.go:141] libmachine: (addons-473197) Getting domain xml...
	I0913 23:27:20.469710   13355 main.go:141] libmachine: (addons-473197) Creating domain...
	I0913 23:27:21.903658   13355 main.go:141] libmachine: (addons-473197) Waiting to get IP...
	I0913 23:27:21.904363   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:21.904874   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:21.904902   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:21.904817   13377 retry.go:31] will retry after 304.697765ms: waiting for machine to come up
	I0913 23:27:22.211392   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.211878   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.211895   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.211847   13377 retry.go:31] will retry after 296.206544ms: waiting for machine to come up
	I0913 23:27:22.509388   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.510038   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.510074   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.509984   13377 retry.go:31] will retry after 351.816954ms: waiting for machine to come up
	I0913 23:27:22.863507   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:22.863981   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:22.864012   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:22.863920   13377 retry.go:31] will retry after 530.240488ms: waiting for machine to come up
	I0913 23:27:23.395630   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:23.396082   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:23.396145   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:23.396069   13377 retry.go:31] will retry after 548.533639ms: waiting for machine to come up
	I0913 23:27:23.945981   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:23.946426   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:23.946449   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:23.946390   13377 retry.go:31] will retry after 804.440442ms: waiting for machine to come up
	I0913 23:27:24.752386   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:24.752879   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:24.752901   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:24.752819   13377 retry.go:31] will retry after 784.165086ms: waiting for machine to come up
	I0913 23:27:25.538164   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:25.538541   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:25.538565   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:25.538498   13377 retry.go:31] will retry after 1.081622308s: waiting for machine to come up
	I0913 23:27:26.621460   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:26.621931   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:26.621955   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:26.621857   13377 retry.go:31] will retry after 1.731303856s: waiting for machine to come up
	I0913 23:27:28.354521   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:28.355071   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:28.355099   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:28.355009   13377 retry.go:31] will retry after 1.496214945s: waiting for machine to come up
	I0913 23:27:29.852809   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:29.853265   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:29.853301   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:29.853227   13377 retry.go:31] will retry after 2.460158583s: waiting for machine to come up
	I0913 23:27:32.316929   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:32.317410   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:32.317431   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:32.317373   13377 retry.go:31] will retry after 3.034476235s: waiting for machine to come up
	I0913 23:27:35.353176   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:35.353654   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find current IP address of domain addons-473197 in network mk-addons-473197
	I0913 23:27:35.353699   13355 main.go:141] libmachine: (addons-473197) DBG | I0913 23:27:35.353589   13377 retry.go:31] will retry after 4.290331524s: waiting for machine to come up
	I0913 23:27:39.649352   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.650002   13355 main.go:141] libmachine: (addons-473197) Found IP for machine: 192.168.39.50
	I0913 23:27:39.650019   13355 main.go:141] libmachine: (addons-473197) Reserving static IP address...
	I0913 23:27:39.650027   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has current primary IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.650461   13355 main.go:141] libmachine: (addons-473197) DBG | unable to find host DHCP lease matching {name: "addons-473197", mac: "52:54:00:2d:a5:2e", ip: "192.168.39.50"} in network mk-addons-473197
	I0913 23:27:39.721216   13355 main.go:141] libmachine: (addons-473197) DBG | Getting to WaitForSSH function...
	I0913 23:27:39.721243   13355 main.go:141] libmachine: (addons-473197) Reserved static IP address: 192.168.39.50
	I0913 23:27:39.721278   13355 main.go:141] libmachine: (addons-473197) Waiting for SSH to be available...
	I0913 23:27:39.723998   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.724611   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.724638   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.724950   13355 main.go:141] libmachine: (addons-473197) DBG | Using SSH client type: external
	I0913 23:27:39.724977   13355 main.go:141] libmachine: (addons-473197) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa (-rw-------)
	I0913 23:27:39.725008   13355 main.go:141] libmachine: (addons-473197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:27:39.725021   13355 main.go:141] libmachine: (addons-473197) DBG | About to run SSH command:
	I0913 23:27:39.725036   13355 main.go:141] libmachine: (addons-473197) DBG | exit 0
	I0913 23:27:39.855960   13355 main.go:141] libmachine: (addons-473197) DBG | SSH cmd err, output: <nil>: 
	I0913 23:27:39.856254   13355 main.go:141] libmachine: (addons-473197) KVM machine creation complete!
	I0913 23:27:39.856646   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:39.857244   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:39.857451   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:39.857626   13355 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:27:39.857643   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:27:39.858795   13355 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:27:39.858808   13355 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:27:39.858813   13355 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:27:39.858832   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:39.861250   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.861689   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.861723   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.861906   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:39.862060   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.862212   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.862395   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:39.862569   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:39.862742   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:39.862751   13355 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:27:39.967145   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:39.967169   13355 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:27:39.967179   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:39.969704   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.970052   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:39.970076   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:39.970268   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:39.970477   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.970645   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:39.970782   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:39.970951   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:39.971103   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:39.971115   13355 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:27:40.076316   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:27:40.076451   13355 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:27:40.076469   13355 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:27:40.076484   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.076736   13355 buildroot.go:166] provisioning hostname "addons-473197"
	I0913 23:27:40.076759   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.076929   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.079647   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.080051   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.080075   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.080207   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.080376   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.080576   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.080715   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.080902   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.081066   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.081078   13355 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-473197 && echo "addons-473197" | sudo tee /etc/hostname
	I0913 23:27:40.201203   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-473197
	
	I0913 23:27:40.201232   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.203941   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.204266   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.204295   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.204445   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.204612   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.204717   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.204938   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.205096   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.205257   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.205288   13355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-473197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-473197/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-473197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:27:40.315830   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:27:40.315864   13355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:27:40.315886   13355 buildroot.go:174] setting up certificates
	I0913 23:27:40.315900   13355 provision.go:84] configureAuth start
	I0913 23:27:40.315916   13355 main.go:141] libmachine: (addons-473197) Calling .GetMachineName
	I0913 23:27:40.316174   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:40.318560   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.318909   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.318938   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.319047   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.320812   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.321063   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.321089   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.321172   13355 provision.go:143] copyHostCerts
	I0913 23:27:40.321244   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:27:40.321370   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:27:40.321425   13355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:27:40.321473   13355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.addons-473197 san=[127.0.0.1 192.168.39.50 addons-473197 localhost minikube]
	I0913 23:27:40.603148   13355 provision.go:177] copyRemoteCerts
	I0913 23:27:40.603210   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:27:40.603234   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.606258   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.606705   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.606739   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.607033   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.607251   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.607362   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.607463   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:40.689713   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:27:40.712453   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:27:40.735387   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:27:40.757966   13355 provision.go:87] duration metric: took 442.049406ms to configureAuth
	I0913 23:27:40.758001   13355 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:27:40.758169   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:27:40.758238   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.760689   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.761096   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.761116   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.761352   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.761591   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.761778   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.761925   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.762072   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:40.762249   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:40.762265   13355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:27:40.978781   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:27:40.978810   13355 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:27:40.978820   13355 main.go:141] libmachine: (addons-473197) Calling .GetURL
	I0913 23:27:40.980184   13355 main.go:141] libmachine: (addons-473197) DBG | Using libvirt version 6000000
	I0913 23:27:40.982058   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.982375   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.982407   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.982552   13355 main.go:141] libmachine: Docker is up and running!
	I0913 23:27:40.982564   13355 main.go:141] libmachine: Reticulating splines...
	I0913 23:27:40.982573   13355 client.go:171] duration metric: took 21.165531853s to LocalClient.Create
	I0913 23:27:40.982600   13355 start.go:167] duration metric: took 21.165604233s to libmachine.API.Create "addons-473197"
	I0913 23:27:40.982612   13355 start.go:293] postStartSetup for "addons-473197" (driver="kvm2")
	I0913 23:27:40.982626   13355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:27:40.982643   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:40.982883   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:27:40.982909   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:40.985049   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.985372   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:40.985397   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:40.985529   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:40.985759   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:40.985932   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:40.986038   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.069472   13355 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:27:41.073428   13355 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:27:41.073453   13355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:27:41.073517   13355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:27:41.073538   13355 start.go:296] duration metric: took 90.917797ms for postStartSetup
	I0913 23:27:41.073579   13355 main.go:141] libmachine: (addons-473197) Calling .GetConfigRaw
	I0913 23:27:41.074107   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:41.077174   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.077818   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.077852   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.078209   13355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/config.json ...
	I0913 23:27:41.078430   13355 start.go:128] duration metric: took 21.280308685s to createHost
	I0913 23:27:41.078523   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.080871   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.081492   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.081509   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.081740   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.081948   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.082106   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.082226   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.082357   13355 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:41.082590   13355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0913 23:27:41.082607   13355 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:27:41.188427   13355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726270061.160461194
	
	I0913 23:27:41.188463   13355 fix.go:216] guest clock: 1726270061.160461194
	I0913 23:27:41.188474   13355 fix.go:229] Guest: 2024-09-13 23:27:41.160461194 +0000 UTC Remote: 2024-09-13 23:27:41.078444881 +0000 UTC m=+21.385670707 (delta=82.016313ms)
	I0913 23:27:41.188531   13355 fix.go:200] guest clock delta is within tolerance: 82.016313ms
	I0913 23:27:41.188539   13355 start.go:83] releasing machines lock for "addons-473197", held for 21.390482943s
	I0913 23:27:41.188568   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.188834   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:41.191630   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.192076   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.192098   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.192320   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.192816   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.192990   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:27:41.193060   13355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:27:41.193115   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.193231   13355 ssh_runner.go:195] Run: cat /version.json
	I0913 23:27:41.193263   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:27:41.195906   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196214   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196337   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.196366   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196541   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.196670   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:41.196705   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:41.196706   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.196834   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:27:41.196880   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.197034   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:27:41.197031   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.197160   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:27:41.197329   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:27:41.272290   13355 ssh_runner.go:195] Run: systemctl --version
	I0913 23:27:41.309754   13355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:27:41.465120   13355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:27:41.470808   13355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:27:41.470872   13355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:27:41.486194   13355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
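The disable step above works by renaming rather than deleting: any conflist matching *bridge* or *podman* gets a .mk_disabled suffix so CRI-O stops loading it. A minimal sketch of what that amounted to in this run (single file, path taken from the log line above):

    # inside the guest VM; the .mk_disabled suffix is minikube's own convention
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
    ls /etc/cni/net.d   # the renamed file no longer matches *bridge* / *podman*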
	I0913 23:27:41.486219   13355 start.go:495] detecting cgroup driver to use...
	I0913 23:27:41.486277   13355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:27:41.501356   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:27:41.514148   13355 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:27:41.514201   13355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:27:41.526902   13355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:27:41.539813   13355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:27:41.653998   13355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:27:41.795256   13355 docker.go:233] disabling docker service ...
	I0913 23:27:41.795338   13355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:27:41.808732   13355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:27:41.820663   13355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:27:41.960800   13355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:27:42.071315   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:27:42.085863   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:42.104721   13355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:27:42.104778   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.115928   13355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:27:42.116006   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.126630   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.136692   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.146840   13355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:27:42.158680   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.169310   13355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:27:42.187197   13355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
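Taken together, the sed edits above converge on a handful of settings in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm them on the node (the expected output below is reconstructed from the commands, not captured in this log, and line order may differ):

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",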
	I0913 23:27:42.197346   13355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:27:42.206456   13355 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:27:42.206517   13355 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:27:42.218600   13355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
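The failed sysctl probe a few lines up only means br_netfilter was not loaded yet (status 255, "cannot stat"); the modprobe plus the ip_forward write put the kernel in the state kubeadm's preflight expects. Assuming the module loads, a re-check would look like:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward           # expected: 1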
	I0913 23:27:42.228617   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:42.336875   13355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:27:42.432370   13355 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:27:42.432459   13355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:27:42.436970   13355 start.go:563] Will wait 60s for crictl version
	I0913 23:27:42.437040   13355 ssh_runner.go:195] Run: which crictl
	I0913 23:27:42.440590   13355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:27:42.475674   13355 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:27:42.475820   13355 ssh_runner.go:195] Run: crio --version
	I0913 23:27:42.501858   13355 ssh_runner.go:195] Run: crio --version
	I0913 23:27:42.529367   13355 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:27:42.530946   13355 main.go:141] libmachine: (addons-473197) Calling .GetIP
	I0913 23:27:42.533556   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:42.533907   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:27:42.533934   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:27:42.534104   13355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:27:42.537936   13355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
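This grep-and-rewrite is how minikube pins its internal hostnames into the guest's /etc/hosts without duplicating entries (control-plane.minikube.internal gets the same treatment later in the log). The net effect of this first edit:

    $ grep host.minikube.internal /etc/hosts
    192.168.39.1	host.minikube.internal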
	I0913 23:27:42.549881   13355 kubeadm.go:883] updating cluster {Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:27:42.549978   13355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:42.550015   13355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:27:42.581270   13355 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 23:27:42.581333   13355 ssh_runner.go:195] Run: which lz4
	I0913 23:27:42.584936   13355 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 23:27:42.588777   13355 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 23:27:42.588812   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 23:27:43.814973   13355 crio.go:462] duration metric: took 1.230077023s to copy over tarball
	I0913 23:27:43.815032   13355 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 23:27:45.932346   13355 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117279223s)
	I0913 23:27:45.932374   13355 crio.go:469] duration metric: took 2.117376082s to extract the tarball
	I0913 23:27:45.932383   13355 ssh_runner.go:146] rm: /preloaded.tar.lz4
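The preload path is: stat /preloaded.tar.lz4 (absent on first boot), scp the ~388 MB cri-o preload tarball from the host cache, unpack it into /var, then delete it. The guest-side half of that, condensed (paths as in the log; the copy itself rides the existing SSH session):

    # after the tarball has landed at /preloaded.tar.lz4 on the guest
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json   # registry.k8s.io/kube-apiserver:v1.31.1 etc. should now be listed
    sudo rm /preloaded.tar.lz4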
	I0913 23:27:45.968777   13355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:27:46.009560   13355 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 23:27:46.009591   13355 cache_images.go:84] Images are preloaded, skipping loading
	I0913 23:27:46.009602   13355 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.31.1 crio true true} ...
	I0913 23:27:46.009706   13355 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-473197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
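The [Service] block above is what later gets written as the 312-byte drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once the kubelet is up, the effective flags can be confirmed from a shell on the guest:

    systemctl cat kubelet                                        # unit plus the 10-kubeadm.conf override
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart line shown above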
	I0913 23:27:46.009801   13355 ssh_runner.go:195] Run: crio config
	I0913 23:27:46.058212   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:46.058233   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:46.058242   13355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:27:46.058265   13355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-473197 NodeName:addons-473197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:27:46.058390   13355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-473197"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:27:46.058449   13355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:46.067747   13355 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:27:46.067836   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 23:27:46.076323   13355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:27:46.091845   13355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:27:46.107011   13355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
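The file just written (kubeadm.yaml.new, later copied to /var/tmp/minikube/kubeadm.yaml for init) still uses the deprecated kubeadm.k8s.io/v1beta3 API, which is why kubeadm 1.31 prints migration warnings further down in this log. Converting it by hand would be exactly the command those warnings suggest (not run by this test; the output path here is illustrative):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
         --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml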
	I0913 23:27:46.122091   13355 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0913 23:27:46.125699   13355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:46.136584   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:46.243887   13355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:46.259537   13355 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197 for IP: 192.168.39.50
	I0913 23:27:46.259566   13355 certs.go:194] generating shared ca certs ...
	I0913 23:27:46.259587   13355 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.259827   13355 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:27:46.322225   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt ...
	I0913 23:27:46.322258   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt: {Name:mke46b90c0d6e2a0d22a599cb0925a94af7cb890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.322470   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key ...
	I0913 23:27:46.322490   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key: {Name:mkeed16d615b1d7b45fa5c87fb359fe1941c704d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.322591   13355 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:27:46.462878   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt ...
	I0913 23:27:46.462907   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt: {Name:mk6b1da2351e5a548bbce01c78eb8ec03bbc9cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.463051   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key ...
	I0913 23:27:46.463061   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key: {Name:mk7ea15f150fb9588b92c5379cfdb24690c332b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.463123   13355 certs.go:256] generating profile certs ...
	I0913 23:27:46.463171   13355 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key
	I0913 23:27:46.463184   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt with IP's: []
	I0913 23:27:46.657652   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt ...
	I0913 23:27:46.657686   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: {Name:mk5f50c2130cbf6a4ae973b8a645d8dcfcea5e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.657857   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key ...
	I0913 23:27:46.657870   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.key: {Name:mk3ec218d1db7592ee3144e8458afc6e59c3670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.657934   13355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74
	I0913 23:27:46.657951   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I0913 23:27:46.879416   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 ...
	I0913 23:27:46.879453   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74: {Name:mkcaab583500a609e501e4f9e7f67d24dbf8d267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.879638   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74 ...
	I0913 23:27:46.879651   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74: {Name:mk892a816842ba211b137a4d62befccce1e5b073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.879724   13355 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt.44267d74 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt
	I0913 23:27:46.879814   13355 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key.44267d74 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key
	I0913 23:27:46.879862   13355 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key
	I0913 23:27:46.879879   13355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt with IP's: []
	I0913 23:27:46.991498   13355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt ...
	I0913 23:27:46.991530   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt: {Name:mkb643e56ac833ce28178330ec7aa1dda3e56b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.991685   13355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key ...
	I0913 23:27:46.991696   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key: {Name:mka2351863ee87552b80a1470ad4d30098e9cd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:46.991874   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:27:46.991908   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:27:46.991933   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:27:46.991956   13355 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:27:46.992518   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:27:47.019880   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:27:47.046183   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:27:47.074948   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:27:47.097532   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 23:27:47.121957   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 23:27:47.146163   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:27:47.170775   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 23:27:47.194281   13355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:27:47.217329   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:27:47.233678   13355 ssh_runner.go:195] Run: openssl version
	I0913 23:27:47.239354   13355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:27:47.249994   13355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.254467   13355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.254522   13355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:47.260224   13355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
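The b5213941.0 symlink name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which the preceding openssl x509 -hash call computes and which c_rehash-style trust stores use as the file name. Reproduced by hand on the node:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0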
	I0913 23:27:47.270703   13355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:27:47.274594   13355 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:27:47.274645   13355 kubeadm.go:392] StartCluster: {Name:addons-473197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:47.274712   13355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 23:27:47.274753   13355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 23:27:47.309951   13355 cri.go:89] found id: ""
	I0913 23:27:47.310012   13355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:27:47.320386   13355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:27:47.330943   13355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:27:47.341759   13355 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:27:47.341780   13355 kubeadm.go:157] found existing configuration files:
	
	I0913 23:27:47.341834   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:27:47.351646   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:27:47.351717   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:27:47.361297   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:27:47.370696   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:27:47.370762   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:27:47.380638   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:27:47.389574   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:27:47.389643   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:27:47.398896   13355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:27:47.408606   13355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:27:47.408676   13355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:27:47.418572   13355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 23:27:47.479386   13355 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:27:47.479472   13355 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:27:47.586391   13355 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:27:47.586505   13355 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:27:47.586582   13355 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:27:47.595987   13355 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:27:47.760778   13355 out.go:235]   - Generating certificates and keys ...
	I0913 23:27:47.760900   13355 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:27:47.760974   13355 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:27:47.761064   13355 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:27:47.820089   13355 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:27:47.938680   13355 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:27:48.078014   13355 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:27:48.155692   13355 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:27:48.155847   13355 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-473197 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0913 23:27:48.397795   13355 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:27:48.397964   13355 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-473197 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0913 23:27:48.511295   13355 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:27:48.569260   13355 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:27:48.662216   13355 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:27:48.662475   13355 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:27:48.761318   13355 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:27:49.204225   13355 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:27:49.285052   13355 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:27:49.530932   13355 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:27:49.596255   13355 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:27:49.596809   13355 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:27:49.599274   13355 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:27:49.601179   13355 out.go:235]   - Booting up control plane ...
	I0913 23:27:49.601276   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:27:49.601348   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:27:49.601425   13355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:27:49.616053   13355 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:27:49.622415   13355 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:27:49.622489   13355 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:27:49.742292   13355 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:27:49.742405   13355 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:27:50.257638   13355 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 516.207513ms
	I0913 23:27:50.257765   13355 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:27:56.254726   13355 kubeadm.go:310] [api-check] The API server is healthy after 6.001344082s
	I0913 23:27:56.266993   13355 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:27:56.292355   13355 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:27:56.323160   13355 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:27:56.323401   13355 kubeadm.go:310] [mark-control-plane] Marking the node addons-473197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:27:56.339238   13355 kubeadm.go:310] [bootstrap-token] Using token: 39ittl.8h26ubvfwyg116f4
	I0913 23:27:56.340707   13355 out.go:235]   - Configuring RBAC rules ...
	I0913 23:27:56.340853   13355 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:27:56.349574   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:27:56.357917   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:27:56.365875   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:27:56.370732   13355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:27:56.375167   13355 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:27:56.666388   13355 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:27:57.109792   13355 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:27:57.661157   13355 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:27:57.662033   13355 kubeadm.go:310] 
	I0913 23:27:57.662163   13355 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:27:57.662184   13355 kubeadm.go:310] 
	I0913 23:27:57.662303   13355 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:27:57.662326   13355 kubeadm.go:310] 
	I0913 23:27:57.662361   13355 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:27:57.662417   13355 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:27:57.662496   13355 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:27:57.662508   13355 kubeadm.go:310] 
	I0913 23:27:57.662586   13355 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:27:57.662598   13355 kubeadm.go:310] 
	I0913 23:27:57.662671   13355 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:27:57.662687   13355 kubeadm.go:310] 
	I0913 23:27:57.662760   13355 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:27:57.662855   13355 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:27:57.662958   13355 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:27:57.662976   13355 kubeadm.go:310] 
	I0913 23:27:57.663089   13355 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:27:57.663197   13355 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:27:57.663210   13355 kubeadm.go:310] 
	I0913 23:27:57.663318   13355 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 39ittl.8h26ubvfwyg116f4 \
	I0913 23:27:57.663464   13355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0913 23:27:57.663493   13355 kubeadm.go:310] 	--control-plane 
	I0913 23:27:57.663502   13355 kubeadm.go:310] 
	I0913 23:27:57.663615   13355 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:27:57.663626   13355 kubeadm.go:310] 
	I0913 23:27:57.663737   13355 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 39ittl.8h26ubvfwyg116f4 \
	I0913 23:27:57.663903   13355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0913 23:27:57.665427   13355 kubeadm.go:310] W0913 23:27:47.456712     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:57.665734   13355 kubeadm.go:310] W0913 23:27:47.457675     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:57.665846   13355 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 23:27:57.665879   13355 cni.go:84] Creating CNI manager for ""
	I0913 23:27:57.665892   13355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:57.667738   13355 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 23:27:57.668898   13355 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 23:27:57.681342   13355 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
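The 496-byte conflist written here is minikube's bridge CNI configuration; its contents are not echoed in this log, so to see what actually landed you would inspect the node directly:

    cat /etc/cni/net.d/1-k8s.conflist   # the bridge CNI config minikube generated (contents not shown in this log)
    sudo crictl info                    # CRI-O's runtime status, including the network/CNI config it picked up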
	I0913 23:27:57.704842   13355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:27:57.704978   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:57.705001   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-473197 minikube.k8s.io/updated_at=2024_09_13T23_27_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-473197 minikube.k8s.io/primary=true
	I0913 23:27:57.725824   13355 ops.go:34] apiserver oom_adj: -16
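The two kubectl calls launched just above give kube-system's default service account cluster-admin via the minikube-rbac binding and stamp the node with minikube's bookkeeping labels. Verifying both with the same bundled kubectl and kubeconfig:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-473197 --show-labels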
	I0913 23:27:57.846283   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:58.347074   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:58.847401   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:59.346340   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:59.846585   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:00.346364   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:00.846560   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:01.347311   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:01.847237   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:02.346723   13355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:28:02.425605   13355 kubeadm.go:1113] duration metric: took 4.720714541s to wait for elevateKubeSystemPrivileges
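The repeated "get sa default" calls above, roughly every half second, are minikube waiting for the cluster to create the default service account before it moves on; the equivalent loop by hand would be something like (a sketch, not minikube's actual code):

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done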
	I0913 23:28:02.425645   13355 kubeadm.go:394] duration metric: took 15.151004151s to StartCluster
	I0913 23:28:02.425662   13355 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:28:02.425785   13355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:28:02.426125   13355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:28:02.426288   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:28:02.426308   13355 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:28:02.426365   13355 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 23:28:02.426474   13355 addons.go:69] Setting yakd=true in profile "addons-473197"
	I0913 23:28:02.426504   13355 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-473197"
	I0913 23:28:02.426508   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:28:02.426517   13355 addons.go:234] Setting addon yakd=true in "addons-473197"
	I0913 23:28:02.426521   13355 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-473197"
	I0913 23:28:02.426514   13355 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-473197"
	I0913 23:28:02.426549   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426556   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426559   13355 addons.go:69] Setting helm-tiller=true in profile "addons-473197"
	I0913 23:28:02.426574   13355 addons.go:234] Setting addon helm-tiller=true in "addons-473197"
	I0913 23:28:02.426574   13355 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-473197"
	I0913 23:28:02.426596   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426597   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426602   13355 addons.go:69] Setting ingress=true in profile "addons-473197"
	I0913 23:28:02.426631   13355 addons.go:234] Setting addon ingress=true in "addons-473197"
	I0913 23:28:02.426669   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.426477   13355 addons.go:69] Setting gcp-auth=true in profile "addons-473197"
	I0913 23:28:02.426731   13355 mustload.go:65] Loading cluster: addons-473197
	I0913 23:28:02.426862   13355 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-473197"
	I0913 23:28:02.426884   13355 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-473197"
	I0913 23:28:02.426885   13355 config.go:182] Loaded profile config "addons-473197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:28:02.427037   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427060   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427061   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427087   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427085   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427129   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427141   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427152   13355 addons.go:69] Setting metrics-server=true in profile "addons-473197"
	I0913 23:28:02.427165   13355 addons.go:234] Setting addon metrics-server=true in "addons-473197"
	I0913 23:28:02.426553   13355 addons.go:69] Setting ingress-dns=true in profile "addons-473197"
	I0913 23:28:02.427179   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427190   13355 addons.go:69] Setting volcano=true in profile "addons-473197"
	I0913 23:28:02.427201   13355 addons.go:234] Setting addon volcano=true in "addons-473197"
	I0913 23:28:02.427211   13355 addons.go:69] Setting registry=true in profile "addons-473197"
	I0913 23:28:02.427221   13355 addons.go:234] Setting addon registry=true in "addons-473197"
	I0913 23:28:02.427222   13355 addons.go:69] Setting storage-provisioner=true in profile "addons-473197"
	I0913 23:28:02.427145   13355 addons.go:69] Setting inspektor-gadget=true in profile "addons-473197"
	I0913 23:28:02.427230   13355 addons.go:69] Setting volumesnapshots=true in profile "addons-473197"
	I0913 23:28:02.427235   13355 addons.go:234] Setting addon storage-provisioner=true in "addons-473197"
	I0913 23:28:02.427239   13355 addons.go:234] Setting addon volumesnapshots=true in "addons-473197"
	I0913 23:28:02.427241   13355 addons.go:234] Setting addon inspektor-gadget=true in "addons-473197"
	I0913 23:28:02.427179   13355 addons.go:234] Setting addon ingress-dns=true in "addons-473197"
	I0913 23:28:02.426496   13355 addons.go:69] Setting cloud-spanner=true in profile "addons-473197"
	I0913 23:28:02.427256   13355 addons.go:234] Setting addon cloud-spanner=true in "addons-473197"
	I0913 23:28:02.426488   13355 addons.go:69] Setting default-storageclass=true in profile "addons-473197"
	I0913 23:28:02.427269   13355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-473197"
	I0913 23:28:02.427330   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427431   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427455   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427463   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427473   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427490   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427456   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427570   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427595   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427628   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427709   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427731   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427821   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427840   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.427846   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427873   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.427975   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.427998   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428068   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428087   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428114   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.428139   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428165   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428185   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.428215   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.428298   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.428659   13355 out.go:177] * Verifying Kubernetes components...
	I0913 23:28:02.430798   13355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:28:02.443895   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0913 23:28:02.447836   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0913 23:28:02.460485   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0913 23:28:02.460874   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.460923   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.462824   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.462871   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.475879   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.475930   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.475955   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476088   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476184   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.476509   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476527   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476781   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476799   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476841   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.476853   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.476873   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.477429   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.477470   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.477700   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.477702   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.478295   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.478318   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.478339   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.478341   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.490076   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34039
	I0913 23:28:02.490970   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.491812   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.491835   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.492304   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.492616   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I0913 23:28:02.493134   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.493820   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.493836   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.494007   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.494989   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0913 23:28:02.497252   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.498698   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.498751   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.499326   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.501553   13355 addons.go:234] Setting addon default-storageclass=true in "addons-473197"
	I0913 23:28:02.501602   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.501968   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.502005   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.502350   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.502365   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.502909   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.503277   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.506570   13355 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-473197"
	I0913 23:28:02.506622   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.507002   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.507046   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.514594   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0913 23:28:02.514782   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0913 23:28:02.515367   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.516584   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.516605   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.517040   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.517686   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.517727   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.518257   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.518363   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0913 23:28:02.519018   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.519037   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.519440   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.519657   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.522038   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I0913 23:28:02.522040   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:02.522394   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.522435   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.522727   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.522970   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.523316   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.523334   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.523520   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.523532   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.523938   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.524557   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.524603   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.525745   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0913 23:28:02.526425   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.526729   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0913 23:28:02.527167   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.527412   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.527429   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.527767   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.527985   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.528007   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.529141   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.529182   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.529468   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.529539   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.530024   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.530070   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.531195   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0913 23:28:02.531769   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0913 23:28:02.532339   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.532382   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.532396   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.532869   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.532894   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.533439   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.533682   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0913 23:28:02.534092   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0913 23:28:02.539974   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0913 23:28:02.540404   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.541583   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.541602   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.541629   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0913 23:28:02.541958   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.542365   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.544696   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.546879   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0913 23:28:02.547431   13355 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 23:28:02.548325   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0913 23:28:02.549791   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:28:02.549808   13355 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 23:28:02.549834   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.552042   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0913 23:28:02.564110   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0913 23:28:02.564127   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.564132   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0913 23:28:02.564116   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.564213   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.564232   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.564116   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0913 23:28:02.564383   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0913 23:28:02.564467   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.564922   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.564951   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.564962   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.565052   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.565067   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.565129   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.565819   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.565933   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.565964   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566027   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.566035   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566045   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.566054   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.566091   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566112   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566136   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566148   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.566172   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0913 23:28:02.567152   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567167   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567256   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567262   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567340   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567349   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567388   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.567474   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567480   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567531   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567546   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567556   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.567609   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.567654   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567664   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567713   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567738   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567747   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567749   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.567798   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567823   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.567915   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.567929   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.568085   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568102   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.568148   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568172   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568188   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568215   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568340   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568402   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.568439   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.568464   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.568519   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.568665   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.568699   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.569269   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.569415   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.569426   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.569482   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.569514   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.569816   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.570420   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.570455   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.570772   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.571101   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.571153   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.571923   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:02.571964   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:02.571931   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.572189   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.572204   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.572331   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.572395   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:02.572403   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:02.573462   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.573488   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.573510   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:02.573528   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:02.574423   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:02.574434   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:02.574441   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:02.573549   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.575002   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:02.575025   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.575042   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:02.577246   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	W0913 23:28:02.577336   13355 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0913 23:28:02.577577   13355 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0913 23:28:02.577709   13355 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 23:28:02.578460   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:02.578635   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0913 23:28:02.578647   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0913 23:28:02.578665   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.579318   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.579608   13355 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:28:02.579851   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 23:28:02.579873   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.579633   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 23:28:02.580938   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:02.580994   13355 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 23:28:02.582107   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 23:28:02.582192   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:28:02.582204   13355 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 23:28:02.582234   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.583277   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.583707   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.583726   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.584022   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.584195   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.584220   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 23:28:02.584401   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.584539   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.584948   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.585611   13355 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:28:02.585633   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 23:28:02.585650   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.585705   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 23:28:02.585820   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.585840   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.586103   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.586338   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.586392   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.586488   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.586648   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.586902   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.586918   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.587228   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.587417   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.587574   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.587716   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.588599   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 23:28:02.589562   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.589986   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.590012   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.590304   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.590503   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.590650   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.590784   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.591169   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 23:28:02.592433   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 23:28:02.592982   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0913 23:28:02.593391   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.593880   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.593904   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.594231   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.594357   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 23:28:02.594365   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.594869   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I0913 23:28:02.595307   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.595835   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.595857   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.596170   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0913 23:28:02.596327   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 23:28:02.596346   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.596551   13355 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:28:02.596571   13355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:28:02.596587   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.596642   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.597297   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.597523   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.597839   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:28:02.597858   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 23:28:02.597882   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.598022   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.598046   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.598359   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.598480   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.600521   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.601746   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.601897   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602020   13355 out.go:177]   - Using image docker.io/busybox:stable
	I0913 23:28:02.602261   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.602280   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602309   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.602332   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.602578   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.602638   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.602773   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.602789   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.602928   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.602925   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.603038   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.603319   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.603369   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.604203   13355 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 23:28:02.605136   13355 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 23:28:02.605271   13355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:28:02.605291   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 23:28:02.605308   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.605737   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0913 23:28:02.606098   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.606452   13355 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 23:28:02.606469   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 23:28:02.606483   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.606619   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.606637   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.606672   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0913 23:28:02.607023   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.607041   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.607206   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.607447   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.607462   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.608306   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.608506   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.609969   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.610193   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.610631   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.610650   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.610800   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.610936   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.611209   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.611513   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.611607   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.611708   13355 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 23:28:02.611881   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.612337   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.612359   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.612853   13355 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 23:28:02.612890   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.612853   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:28:02.612935   13355 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 23:28:02.612955   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.613679   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.614172   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.614301   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.614358   13355 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:28:02.614375   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 23:28:02.614391   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.616797   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617557   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.617585   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617630   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.617685   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.617710   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0913 23:28:02.617846   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.617859   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.617871   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.618067   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.618131   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.618188   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.618375   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.618533   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.618780   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.618907   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.618920   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.619116   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.619427   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.619639   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.621163   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.623020   13355 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 23:28:02.624487   13355 out.go:177]   - Using image docker.io/registry:2.8.3
	W0913 23:28:02.625390   13355 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.50:22: read: connection reset by peer
	I0913 23:28:02.625420   13355 retry.go:31] will retry after 203.721913ms: ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.50:22: read: connection reset by peer
	I0913 23:28:02.625979   13355 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:28:02.625996   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 23:28:02.626020   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.626338   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0913 23:28:02.626915   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.628248   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.628278   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.628731   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.628951   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.629689   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.630408   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I0913 23:28:02.630603   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.630607   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.630644   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.630752   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.630897   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.630954   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:02.631042   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.631079   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.631402   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:02.631424   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:02.631759   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:02.632088   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:02.632893   13355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:28:02.633647   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:02.634372   13355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:28:02.634400   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:28:02.634419   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.634983   13355 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 23:28:02.635926   13355 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:28:02.635944   13355 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 23:28:02.635961   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:02.637913   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638299   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.638327   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638429   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.638456   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.638653   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.638829   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.639005   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:02.638906   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:02.639049   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:02.639088   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:02.639199   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:02.639371   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:02.639577   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:03.010874   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 23:28:03.011331   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:28:03.027301   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:28:03.027323   13355 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 23:28:03.067510   13355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:28:03.067570   13355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 23:28:03.088658   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:28:03.092881   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:28:03.096079   13355 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:28:03.096109   13355 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 23:28:03.118568   13355 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:28:03.118604   13355 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 23:28:03.151579   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:28:03.151606   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 23:28:03.163844   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:28:03.171501   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0913 23:28:03.171531   13355 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0913 23:28:03.174545   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:28:03.174571   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 23:28:03.212903   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:28:03.223453   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:28:03.228572   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:28:03.228604   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 23:28:03.250777   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:28:03.250803   13355 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 23:28:03.279463   13355 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 23:28:03.279488   13355 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 23:28:03.302426   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:28:03.302459   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 23:28:03.319332   13355 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:28:03.319353   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 23:28:03.330057   13355 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:28:03.330085   13355 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0913 23:28:03.407024   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:28:03.407056   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 23:28:03.440023   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:28:03.440055   13355 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 23:28:03.479290   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:28:03.479317   13355 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 23:28:03.491399   13355 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:28:03.491426   13355 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 23:28:03.520500   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:28:03.531329   13355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:28:03.531360   13355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 23:28:03.560362   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:28:03.703012   13355 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:28:03.703042   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 23:28:03.713271   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:28:03.713301   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 23:28:03.714632   13355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:28:03.714653   13355 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 23:28:03.719658   13355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:28:03.719678   13355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 23:28:03.737269   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:28:03.737304   13355 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 23:28:03.889071   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:28:03.908115   13355 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:03.908155   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 23:28:03.918960   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:28:03.941219   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:28:03.941249   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 23:28:03.994232   13355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:28:03.994259   13355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 23:28:04.229209   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:04.267554   13355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:28:04.267577   13355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 23:28:04.330516   13355 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:28:04.330552   13355 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 23:28:04.536905   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:28:04.536936   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 23:28:04.590128   13355 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:28:04.590152   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 23:28:04.788803   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:28:04.816897   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:28:04.816931   13355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 23:28:05.234442   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:28:05.234478   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 23:28:05.583587   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:28:05.583614   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 23:28:05.923679   13355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:28:05.923710   13355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 23:28:06.123490   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.112567467s)
	I0913 23:28:06.123547   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:06.123557   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:06.123855   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:06.123869   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:06.123883   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:06.123892   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:06.124216   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:06.124238   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:06.363736   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:28:07.633977   13355 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.566365758s)
	I0913 23:28:07.634011   13355 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 23:28:07.634023   13355 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.566476402s)
	I0913 23:28:07.634039   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.622680865s)
	I0913 23:28:07.634089   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.634105   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.634380   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.634428   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.634436   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.634448   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.634455   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.634784   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.634856   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.634890   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.635047   13355 node_ready.go:35] waiting up to 6m0s for node "addons-473197" to be "Ready" ...
	I0913 23:28:07.650081   13355 node_ready.go:49] node "addons-473197" has status "Ready":"True"
	I0913 23:28:07.650107   13355 node_ready.go:38] duration metric: took 15.042078ms for node "addons-473197" to be "Ready" ...
	I0913 23:28:07.650117   13355 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:07.696618   13355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:07.988840   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.900143589s)
	I0913 23:28:07.988889   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.988902   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.988909   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.895998713s)
	I0913 23:28:07.988947   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.988962   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.988991   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.825104432s)
	I0913 23:28:07.989064   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.776127396s)
	I0913 23:28:07.989142   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989163   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989177   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989178   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.989192   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989202   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989230   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989069   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989500   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989274   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989532   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989541   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989547   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989777   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.989817   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989833   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.989842   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989843   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989850   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989854   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:07.989856   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989864   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:07.989280   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990285   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990340   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990363   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990372   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989408   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990392   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.990409   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.990434   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.990442   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:07.989433   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:07.992583   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:07.992598   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:08.078646   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:08.078674   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:08.079091   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:08.079153   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:08.079168   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	W0913 23:28:08.079276   13355 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0913 23:28:08.086087   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:08.086136   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:08.086492   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:08.086562   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:08.086620   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:08.150438   13355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-473197" context rescaled to 1 replicas
	I0913 23:28:08.748384   13355 pod_ready.go:93] pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.748408   13355 pod_ready.go:82] duration metric: took 1.05175792s for pod "coredns-7c65d6cfc9-bfbnw" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.748418   13355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.799453   13355 pod_ready.go:93] pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.799484   13355 pod_ready.go:82] duration metric: took 51.058777ms for pod "coredns-7c65d6cfc9-kx4xn" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.799510   13355 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.874578   13355 pod_ready.go:93] pod "etcd-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:08.874605   13355 pod_ready.go:82] duration metric: took 75.087265ms for pod "etcd-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:08.874616   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.604747   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 23:28:09.604789   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:09.608703   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:09.609227   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:09.609263   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:09.609479   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:09.609669   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:09.609849   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:09.610002   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:09.882148   13355 pod_ready.go:93] pod "kube-apiserver-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:09.882180   13355 pod_ready.go:82] duration metric: took 1.007556164s for pod "kube-apiserver-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.882192   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.894451   13355 pod_ready.go:93] pod "kube-controller-manager-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:09.894497   13355 pod_ready.go:82] duration metric: took 12.295374ms for pod "kube-controller-manager-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:09.894514   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vg8p5" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.038855   13355 pod_ready.go:93] pod "kube-proxy-vg8p5" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:10.038887   13355 pod_ready.go:82] duration metric: took 144.362352ms for pod "kube-proxy-vg8p5" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.038901   13355 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.156523   13355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 23:28:10.274748   13355 addons.go:234] Setting addon gcp-auth=true in "addons-473197"
	I0913 23:28:10.274811   13355 host.go:66] Checking if "addons-473197" exists ...
	I0913 23:28:10.275129   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:10.275181   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:10.290032   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0913 23:28:10.290544   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:10.291078   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:10.291104   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:10.291475   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:10.292074   13355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:28:10.292121   13355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:28:10.306929   13355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0913 23:28:10.307597   13355 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:28:10.308136   13355 main.go:141] libmachine: Using API Version  1
	I0913 23:28:10.308165   13355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:28:10.308479   13355 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:28:10.308653   13355 main.go:141] libmachine: (addons-473197) Calling .GetState
	I0913 23:28:10.310373   13355 main.go:141] libmachine: (addons-473197) Calling .DriverName
	I0913 23:28:10.310613   13355 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 23:28:10.310635   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHHostname
	I0913 23:28:10.313460   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:10.313874   13355 main.go:141] libmachine: (addons-473197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:a5:2e", ip: ""} in network mk-addons-473197: {Iface:virbr1 ExpiryTime:2024-09-14 00:27:34 +0000 UTC Type:0 Mac:52:54:00:2d:a5:2e Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-473197 Clientid:01:52:54:00:2d:a5:2e}
	I0913 23:28:10.313918   13355 main.go:141] libmachine: (addons-473197) DBG | domain addons-473197 has defined IP address 192.168.39.50 and MAC address 52:54:00:2d:a5:2e in network mk-addons-473197
	I0913 23:28:10.314081   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHPort
	I0913 23:28:10.314245   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHKeyPath
	I0913 23:28:10.314388   13355 main.go:141] libmachine: (addons-473197) Calling .GetSSHUsername
	I0913 23:28:10.314538   13355 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/addons-473197/id_rsa Username:docker}
	I0913 23:28:10.441154   13355 pod_ready.go:93] pod "kube-scheduler-addons-473197" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:10.441189   13355 pod_ready.go:82] duration metric: took 402.279342ms for pod "kube-scheduler-addons-473197" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:10.441203   13355 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:11.038273   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.814781844s)
	I0913 23:28:11.038325   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038338   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038351   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.517814291s)
	I0913 23:28:11.038392   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038411   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038417   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.478018749s)
	I0913 23:28:11.038450   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038462   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038481   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.149383482s)
	I0913 23:28:11.038503   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038527   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.119530303s)
	I0913 23:28:11.038556   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038571   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038518   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038634   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.809394559s)
	W0913 23:28:11.038660   13355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:28:11.038679   13355 retry.go:31] will retry after 183.620302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:28:11.038717   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.249875908s)
	I0913 23:28:11.038739   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038748   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038848   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.038862   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.038871   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038865   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.038888   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.038899   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.038910   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.038878   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039010   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039031   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039036   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039057   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039069   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039122   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039149   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039160   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039167   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039166   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039204   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039214   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039133   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039231   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039239   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.039245   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.039016   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039310   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.039467   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.039314   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039385   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039222   13355 addons.go:475] Verifying addon ingress=true in "addons-473197"
	I0913 23:28:11.039415   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.039428   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.040400   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.040410   13355 addons.go:475] Verifying addon metrics-server=true in "addons-473197"
	I0913 23:28:11.041432   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.041448   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.041458   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:11.041473   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:11.041801   13355 out.go:177] * Verifying ingress addon...
	I0913 23:28:11.042190   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042207   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042216   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.042219   13355 addons.go:475] Verifying addon registry=true in "addons-473197"
	I0913 23:28:11.042423   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042430   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:11.042439   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042443   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:11.042448   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:11.043752   13355 out.go:177] * Verifying registry addon...
	I0913 23:28:11.043754   13355 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-473197 service yakd-dashboard -n yakd-dashboard
	
	I0913 23:28:11.044788   13355 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 23:28:11.046424   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 23:28:11.081206   13355 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 23:28:11.081236   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.081287   13355 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 23:28:11.081297   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.223004   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:28:11.561874   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.562467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.057966   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.058896   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.469345   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:12.561195   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.600083   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.619662   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.255869457s)
	I0913 23:28:12.619725   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.619738   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.619748   13355 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.309112473s)
	I0913 23:28:12.619902   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396587931s)
	I0913 23:28:12.619956   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.619976   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620101   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620159   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620169   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620183   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.620191   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620194   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620202   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620223   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:12.620230   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:12.620426   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620437   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.620437   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620447   13355 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-473197"
	I0913 23:28:12.620532   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:12.620512   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:12.620564   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:12.623355   13355 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 23:28:12.623358   13355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:12.625412   13355 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 23:28:12.626098   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 23:28:12.626980   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:28:12.627005   13355 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 23:28:12.634155   13355 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 23:28:12.634185   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.701404   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:28:12.701431   13355 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 23:28:12.784012   13355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:12.784039   13355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 23:28:12.826052   13355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:13.050608   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.054294   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.131996   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.549130   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.550698   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.654447   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.954168   13355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.128064006s)
	I0913 23:28:13.954227   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:13.954246   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:13.954502   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:13.954524   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:13.954543   13355 main.go:141] libmachine: Making call to close driver server
	I0913 23:28:13.954551   13355 main.go:141] libmachine: (addons-473197) Calling .Close
	I0913 23:28:13.954561   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:13.954804   13355 main.go:141] libmachine: (addons-473197) DBG | Closing plugin on server side
	I0913 23:28:13.954864   13355 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:28:13.954887   13355 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:28:13.956609   13355 addons.go:475] Verifying addon gcp-auth=true in "addons-473197"
	I0913 23:28:13.958261   13355 out.go:177] * Verifying gcp-auth addon...
	I0913 23:28:13.960562   13355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 23:28:14.052223   13355 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:28:14.052254   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:14.137186   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.137455   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.211253   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.466086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:14.550740   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.552353   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.633397   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.950640   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:14.966723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:15.066865   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.067365   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.131415   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.466378   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:15.549510   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.552396   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.632635   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.964956   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:16.049836   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.054146   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.131263   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.464327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:16.549627   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.553008   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.632296   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.225129   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:17.225473   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:17.225716   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.226083   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.226210   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.464982   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:17.550258   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.550361   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.630780   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.964491   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:18.049246   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.050330   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.131607   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.464703   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:18.549790   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.550896   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.631297   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.965276   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:19.049836   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.051294   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.131697   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.447973   13355 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:19.464571   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:19.550276   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.551651   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.631103   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.964917   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:20.049683   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.050503   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.130574   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.464865   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:20.550041   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.551487   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.631097   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.969748   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:21.069252   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.069792   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.132416   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.452205   13355 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:21.452227   13355 pod_ready.go:82] duration metric: took 11.011016466s for pod "nvidia-device-plugin-daemonset-vfb4s" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:21.452243   13355 pod_ready.go:39] duration metric: took 13.802114071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:21.452257   13355 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:28:21.452309   13355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:21.464504   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:21.469459   13355 api_server.go:72] duration metric: took 19.043113394s to wait for apiserver process to appear ...
	I0913 23:28:21.469484   13355 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:28:21.469502   13355 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0913 23:28:21.474255   13355 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0913 23:28:21.475191   13355 api_server.go:141] control plane version: v1.31.1
	I0913 23:28:21.475215   13355 api_server.go:131] duration metric: took 5.722944ms to wait for apiserver health ...
	I0913 23:28:21.475222   13355 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:28:21.482377   13355 system_pods.go:59] 18 kube-system pods found
	I0913 23:28:21.482406   13355 system_pods.go:61] "coredns-7c65d6cfc9-kx4xn" [f7804727-02ec-474f-b927-f1c4b25ebc89] Running
	I0913 23:28:21.482416   13355 system_pods.go:61] "csi-hostpath-attacher-0" [b0107b78-0c42-480c-8e34-183874425dcd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:21.482422   13355 system_pods.go:61] "csi-hostpath-resizer-0" [4702d211-9a00-4c2c-8be1-9fa3a113583b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:21.482432   13355 system_pods.go:61] "csi-hostpathplugin-b8vk7" [f73ad797-356a-4442-93ce-41561df1c69e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:21.482439   13355 system_pods.go:61] "etcd-addons-473197" [e80abbef-1287-423a-9a02-307822608583] Running
	I0913 23:28:21.482445   13355 system_pods.go:61] "kube-apiserver-addons-473197" [3d5345af-6e8f-473f-a003-2319da2b81c8] Running
	I0913 23:28:21.482450   13355 system_pods.go:61] "kube-controller-manager-addons-473197" [44103129-212d-4d61-9db8-89d56eae1e01] Running
	I0913 23:28:21.482461   13355 system_pods.go:61] "kube-ingress-dns-minikube" [3db76d21-1e5d-4ece-8925-c84d0df606bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 23:28:21.482472   13355 system_pods.go:61] "kube-proxy-vg8p5" [af4c8131-921e-411d-853d-135361aa197b] Running
	I0913 23:28:21.482478   13355 system_pods.go:61] "kube-scheduler-addons-473197" [4e458740-ccbe-4f06-b2f3-f721aa78a0af] Running
	I0913 23:28:21.482484   13355 system_pods.go:61] "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:21.482500   13355 system_pods.go:61] "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
	I0913 23:28:21.482510   13355 system_pods.go:61] "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:21.482517   13355 system_pods.go:61] "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:21.482524   13355 system_pods.go:61] "snapshot-controller-56fcc65765-9lcg8" [ed7715dd-0396-4272-bc7f-531d103d8a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.482532   13355 system_pods.go:61] "snapshot-controller-56fcc65765-f8fq2" [3c9ad9a8-2450-4bf4-a6c6-4e2ca0026232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.482537   13355 system_pods.go:61] "storage-provisioner" [8268a064-fb82-447e-987d-931165d33b2d] Running
	I0913 23:28:21.482547   13355 system_pods.go:61] "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:21.482560   13355 system_pods.go:74] duration metric: took 7.331476ms to wait for pod list to return data ...
	I0913 23:28:21.482573   13355 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:28:21.484999   13355 default_sa.go:45] found service account: "default"
	I0913 23:28:21.485018   13355 default_sa.go:55] duration metric: took 2.439792ms for default service account to be created ...
	I0913 23:28:21.485024   13355 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:28:21.492239   13355 system_pods.go:86] 18 kube-system pods found
	I0913 23:28:21.492270   13355 system_pods.go:89] "coredns-7c65d6cfc9-kx4xn" [f7804727-02ec-474f-b927-f1c4b25ebc89] Running
	I0913 23:28:21.492278   13355 system_pods.go:89] "csi-hostpath-attacher-0" [b0107b78-0c42-480c-8e34-183874425dcd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:21.492304   13355 system_pods.go:89] "csi-hostpath-resizer-0" [4702d211-9a00-4c2c-8be1-9fa3a113583b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:21.492313   13355 system_pods.go:89] "csi-hostpathplugin-b8vk7" [f73ad797-356a-4442-93ce-41561df1c69e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:21.492317   13355 system_pods.go:89] "etcd-addons-473197" [e80abbef-1287-423a-9a02-307822608583] Running
	I0913 23:28:21.492322   13355 system_pods.go:89] "kube-apiserver-addons-473197" [3d5345af-6e8f-473f-a003-2319da2b81c8] Running
	I0913 23:28:21.492326   13355 system_pods.go:89] "kube-controller-manager-addons-473197" [44103129-212d-4d61-9db8-89d56eae1e01] Running
	I0913 23:28:21.492332   13355 system_pods.go:89] "kube-ingress-dns-minikube" [3db76d21-1e5d-4ece-8925-c84d0df606bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 23:28:21.492336   13355 system_pods.go:89] "kube-proxy-vg8p5" [af4c8131-921e-411d-853d-135361aa197b] Running
	I0913 23:28:21.492345   13355 system_pods.go:89] "kube-scheduler-addons-473197" [4e458740-ccbe-4f06-b2f3-f721aa78a0af] Running
	I0913 23:28:21.492354   13355 system_pods.go:89] "metrics-server-84c5f94fbc-2rwbq" [157685d1-cf53-409b-8a21-e77779bcbbd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:21.492361   13355 system_pods.go:89] "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
	I0913 23:28:21.492367   13355 system_pods.go:89] "registry-66c9cd494c-8xjqt" [7b0c1721-acbc-44f4-81ce-3918399c4448] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:21.492375   13355 system_pods.go:89] "registry-proxy-lsphw" [8031cc7e-4d9b-4151-bca2-ec5eda26c3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:21.492382   13355 system_pods.go:89] "snapshot-controller-56fcc65765-9lcg8" [ed7715dd-0396-4272-bc7f-531d103d8a9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.492387   13355 system_pods.go:89] "snapshot-controller-56fcc65765-f8fq2" [3c9ad9a8-2450-4bf4-a6c6-4e2ca0026232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:21.492391   13355 system_pods.go:89] "storage-provisioner" [8268a064-fb82-447e-987d-931165d33b2d] Running
	I0913 23:28:21.492399   13355 system_pods.go:89] "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:21.492407   13355 system_pods.go:126] duration metric: took 7.377814ms to wait for k8s-apps to be running ...
	I0913 23:28:21.492417   13355 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:28:21.492462   13355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:28:21.506589   13355 system_svc.go:56] duration metric: took 14.16145ms WaitForService to wait for kubelet
	I0913 23:28:21.506620   13355 kubeadm.go:582] duration metric: took 19.080279709s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:28:21.506641   13355 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:28:21.509697   13355 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:28:21.509728   13355 node_conditions.go:123] node cpu capacity is 2
	I0913 23:28:21.509740   13355 node_conditions.go:105] duration metric: took 3.093718ms to run NodePressure ...
	I0913 23:28:21.509750   13355 start.go:241] waiting for startup goroutines ...
	I0913 23:28:21.549269   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.549838   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.630759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.964996   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:22.066659   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.066988   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.130457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.464269   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:22.550603   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:22.551392   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.631480   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.964384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:23.049834   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.050736   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.133507   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.464509   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:23.549382   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.552128   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:23.631843   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.965613   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:24.049624   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.050338   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.131212   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.464759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:24.549437   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.551097   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:24.630910   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.964175   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:25.048277   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.050045   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.131365   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.977617   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:25.978628   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.978709   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.979158   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:25.981429   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:26.049520   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.051681   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.130220   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.464159   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:26.549552   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.551222   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:26.631176   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.963871   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:27.050910   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.052011   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.132349   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.464810   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:27.549257   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.550786   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:27.630897   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.964079   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:28.050122   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.050142   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.151036   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.464673   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:28.549691   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.549874   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:28.630545   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.963838   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:29.049223   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.051589   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.131701   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.464227   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:29.549018   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.552460   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:29.631494   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.964688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:30.066437   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:30.066971   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.132136   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.464961   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:30.549367   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.550784   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:30.631748   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.964913   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:31.051008   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:31.051249   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.130779   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.464391   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:31.551575   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:31.552105   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.631630   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.965632   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:32.101759   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:32.101841   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.131740   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.464572   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:32.549356   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.550906   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:32.633073   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.964216   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:33.048975   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.050916   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:33.131112   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.463822   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:33.549425   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.550516   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:33.630393   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.964336   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:34.048857   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.050443   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:34.151118   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.465096   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:34.549740   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.550620   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:34.631086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.966455   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:35.049659   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:35.050047   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.131495   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.465132   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:35.548766   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.550376   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:35.631577   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.964286   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:36.049062   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.050210   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:36.131543   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.464275   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:36.548452   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.550456   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:36.631360   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.963688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:37.049820   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.050743   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:37.130637   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.464113   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:37.549304   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.550688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:37.631192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.963973   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:38.051608   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.051727   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:38.133034   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.464549   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:38.559078   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:38.559213   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.631291   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.964483   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:39.050741   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:39.051159   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.131060   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.464822   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:39.549844   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:39.550291   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.630944   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.965248   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:40.048824   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.050349   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:40.131327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.464279   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:40.549628   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.550481   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:40.630731   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.964314   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:41.048937   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.050618   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:41.130605   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.464689   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:41.549726   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.550735   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:41.630990   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.964388   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:42.048950   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.050795   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:42.131078   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.464031   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:42.550212   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.551605   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:42.631901   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.965017   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:43.049775   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.050581   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:43.131657   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.464727   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:43.550289   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.550580   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:43.630961   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:43.965047   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:44.048962   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.050171   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:44.131175   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.463892   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:44.565475   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:44.565612   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.632466   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:44.964688   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:45.049299   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:45.050431   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.134055   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.463841   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:45.550749   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.550792   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:45.631218   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:45.964803   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.049789   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.050384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:46.131201   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.465262   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.554496   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.555890   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:46.631739   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:46.963850   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.049818   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:47.051135   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.134195   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.465246   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.549517   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.550721   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:47.633663   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:47.964089   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.049632   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.050325   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:48.131567   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:48.466199   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.549697   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.550894   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:48.632690   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:48.964192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.049080   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.050467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:49.131986   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:49.464641   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.552164   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.554375   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:49.631764   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:49.965086   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.049392   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.050669   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:50.131492   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:50.464328   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.549524   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.550434   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:50.631322   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:50.964441   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.049783   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.055312   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:51.131190   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:51.464922   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.550169   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:51.550221   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.631339   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:51.964457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.049661   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.051864   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:52.132038   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:52.582166   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.583770   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:52.584179   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.630661   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:52.964384   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.049046   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.050467   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:53.131202   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:53.464541   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.549549   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.551453   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:53.630606   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:53.964993   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.050779   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:54.051367   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.131038   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:54.464444   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.549153   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.551452   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:54.848826   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:54.964836   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.050095   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.050302   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:55.131159   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:55.464360   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.564936   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:55.565447   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.666242   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:55.964847   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.049829   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.051453   13355 kapi.go:107] duration metric: took 45.005028778s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 23:28:56.131651   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:56.464265   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.549020   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.630993   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:56.964711   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.049527   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.132133   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:57.464568   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.550287   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.631088   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:57.965832   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.066601   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:58.131348   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:58.464693   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.551166   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:58.632041   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:58.965180   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.066338   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:59.131515   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:59.463658   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.548973   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:59.630391   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:59.964296   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.049386   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:00.130469   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:00.463737   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.549776   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:00.717623   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:00.964483   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.049274   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:01.131153   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:01.463888   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.549890   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:01.631219   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.255077   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.255610   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:02.255728   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.474419   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.574193   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:02.630689   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:02.964630   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.049565   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:03.131380   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:03.464744   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.549449   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:03.630833   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:03.965101   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.048562   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:04.131484   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:04.466051   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.568692   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:04.668110   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:04.967488   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.049862   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:05.132252   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:05.464896   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.549994   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:05.630434   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:05.964526   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.065548   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:06.166487   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:06.464128   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.549947   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:06.631713   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:06.963955   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.049715   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:07.130974   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:07.464504   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.550454   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:07.630666   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:07.967197   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.068388   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:08.168815   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:08.464599   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.550992   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:08.630627   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:08.966766   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.053073   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:09.130730   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:09.465025   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.567230   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:09.630516   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:09.965721   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.054440   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:10.130768   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:10.464306   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.548749   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:10.631327   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.276930   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:11.277860   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.279328   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.471697   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.582335   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:11.674829   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:11.965501   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.048830   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:12.130570   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:12.466419   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.553795   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:12.631061   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:12.964723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.051802   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:13.129998   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:13.465020   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.566946   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:13.632019   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:13.969250   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.050082   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:14.130824   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:14.464827   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.565739   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:14.629990   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:14.974680   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.049645   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:15.130802   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:15.464723   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.567052   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:15.631421   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:15.964586   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.049406   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:16.130916   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:16.465274   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.548963   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:16.630852   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:16.964129   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.048736   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:17.131304   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:17.465372   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.549339   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:17.631400   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:17.964595   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.048825   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:18.130668   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:18.463994   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.550503   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:18.632529   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:18.978043   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.049954   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:19.131952   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:19.464512   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.551136   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:19.632160   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:19.964960   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.242123   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:20.242829   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:20.465827   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.550268   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:20.633322   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:20.964413   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.049949   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:21.132854   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:21.671555   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:21.673400   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.673957   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:21.963871   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.050196   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:22.130368   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:22.464308   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.549420   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:22.630664   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:22.963895   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.049709   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:23.150900   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:23.464457   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.548815   13355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:29:23.631125   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:23.976832   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.078240   13355 kapi.go:107] duration metric: took 1m13.033450728s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 23:29:24.131740   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:24.464968   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.118892   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.121603   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.131661   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.464273   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.631894   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:25.964763   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.130778   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:26.465365   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.630404   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:26.963974   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.131493   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:27.464501   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.632858   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:27.963992   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.132535   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:28.464106   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.633421   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:28.969206   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.132088   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:29.466471   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.631809   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:29.966539   13355 kapi.go:107] duration metric: took 1m16.005977096s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 23:29:29.967938   13355 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-473197 cluster.
	I0913 23:29:29.969110   13355 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 23:29:29.970285   13355 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 23:29:30.131386   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:30.632192   13355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:29:31.132279   13355 kapi.go:107] duration metric: took 1m18.506177888s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 23:29:31.134114   13355 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0913 23:29:31.135471   13355 addons.go:510] duration metric: took 1m28.709101641s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns default-storageclass inspektor-gadget metrics-server helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0913 23:29:31.135518   13355 start.go:246] waiting for cluster config update ...
	I0913 23:29:31.135543   13355 start.go:255] writing updated cluster config ...
	I0913 23:29:31.135825   13355 ssh_runner.go:195] Run: rm -f paused
	I0913 23:29:31.187868   13355 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:29:31.189865   13355 out.go:177] * Done! kubectl is now configured to use "addons-473197" cluster and "default" namespace by default
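The long run of kapi.go:96 entries above is minikube polling the API server: for each addon it lists pods matching a label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) and repeats until every matching pod leaves Pending and reports Running, at which point the kapi.go:107 duration line is printed. A minimal client-go sketch of that pattern follows; it is not minikube's actual kapi.go code, and the poll interval, timeout, namespace, and kubeconfig path are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls the namespace until every pod matching selector is Running.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // poll interval is an assumption
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Kubeconfig path is an assumption (~/.kube/config); minikube writes its context there.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
			panic(err)
		}
		fmt.Println("all pods matching the selector are Running")
	}

Under these assumptions the loop prints the same kind of "waiting for pod ..., current state: Pending" line on each pass and returns once the selector matches only Running pods.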
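The gcp-auth hints printed above say the webhook mounts GCP credentials into every newly created pod unless the pod carries a label with the gcp-auth-skip-secret key. A hedged client-go sketch of opting a single pod out is shown below; the pod name, namespace, and image are illustrative and not taken from this test run.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The gcp-auth-skip-secret label tells the gcp-auth webhook not to inject
		// credentials into this pod; everything else here is a placeholder.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "skip-gcp-auth-demo", // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "demo",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod without mounted GCP credentials:", created.Name)
	}

Pods created without that label after the addon is enabled get the credential mount; per the log output above, pods that already existed need to be recreated or the addon re-enabled with --refresh.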
	
	
	==> CRI-O <==
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.769053602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270981769023268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0839fb7d-08a0-4170-ba01-b67fd0fe7cdd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.769650892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fdf8209-97c2-4623-a789-0b2aa1e5a0e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.769708311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fdf8209-97c2-4623-a789-0b2aa1e5a0e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.769995032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d
50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b806377737
1005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apis
erver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,}
,Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fdf8209-97c2-4623-a789-0b2aa1e5a0e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.814334685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfd9028c-a2c8-42e7-834c-626b9ca54288 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.814419493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfd9028c-a2c8-42e7-834c-626b9ca54288 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.815867312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73b35b16-9b08-483e-bca9-04db8f45e1bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.817200242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270981817146119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73b35b16-9b08-483e-bca9-04db8f45e1bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.817787637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b4489d8-b3d5-4418-ae94-788c52409231 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.817860485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b4489d8-b3d5-4418-ae94-788c52409231 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.818218700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d
50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b806377737
1005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apis
erver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,}
,Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b4489d8-b3d5-4418-ae94-788c52409231 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.855455808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea666a7b-2f18-418c-9b25-f4f3a6356b3e name=/runtime.v1.RuntimeService/Version
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.855528955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea666a7b-2f18-418c-9b25-f4f3a6356b3e name=/runtime.v1.RuntimeService/Version
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.856682094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8848fbca-2732-48d1-a9b4-aaeae0f73b66 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.857877564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270981857834372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8848fbca-2732-48d1-a9b4-aaeae0f73b66 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.858486877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=338bc143-aa7e-43ed-9742-825c1cac6c97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.858562309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=338bc143-aa7e-43ed-9742-825c1cac6c97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.858888351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d
50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b806377737
1005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apis
erver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,}
,Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=338bc143-aa7e-43ed-9742-825c1cac6c97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.898760599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9132c530-b827-44ab-87e0-52e2b17e9389 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.898835468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9132c530-b827-44ab-87e0-52e2b17e9389 name=/runtime.v1.RuntimeService/Version
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.900087055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd37476a-9315-47c8-86e1-264308c8e7bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.901766519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270981901691028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd37476a-9315-47c8-86e1-264308c8e7bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.902607046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc3a377a-f66a-4a46-bbf4-4955db39aab0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.902671898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc3a377a-f66a-4a46-bbf4-4955db39aab0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 13 23:43:01 addons-473197 crio[661]: time="2024-09-13 23:43:01.903333410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f66e484f0bcf3e9cfb43d7febc62fdc25d280b22c6a14e20aeaf2e6be9b1bd3d,PodSandboxId:a2fbf4073cbb5362bc518cd7ff0741932e619bce975fce3d6b5a14b6f13ae6f6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726270867236894173,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-jwks5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1adcfbe2-6ad2-4779-8429-55e9b080fc4c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97beb09dce981b9aaad84aded63a27cf313ace823be9f4fd978f56359881aaf7,PodSandboxId:e4292583c4fab636c95e218fb349876ab8e973722346e72611427df1691ea7d9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726270724791869685,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4959ba5e-162e-43f3-987e-5dc829126b9d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038624c91b1cde88be8104211c587cf37c428a0df49b5e48927fef72eed21189,PodSandboxId:2a8766ca0210cfea9aa3c89a556753573d097c084f35cfbee4718d4c84eb848a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1726270713116585964,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-z5dzh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299,PodSandboxId:d580ec2a8856099d1be820daf699ec90d9f4ceb2e6b2e753dc0cd4c017087a4b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726270169221241350,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-74znl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8797038b-501c-49c8-b165-7c1454b6cd59,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e992df68051b876e1ae05c0d9f14961461c22b970762232f6c9dda69010348,PodSandboxId:dc9bf0b998e05ff84c4a2031638fed1632b27b4116b14a1f6edaa1b646c50721,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1726270137121025627,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157685d1-cf53-409b-8a21-e77779bcbbd6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8804d28cfdd17cbc307538fd42df81030ef736285e8993d3432e6dc12985ab,PodSandboxId:458dcb49d1f7b1eee9b79c9716f7b620f469e2c4a5115bdc032483fb27666233,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726270128349022347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-5c8rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e1d5b7dd-422f-4d44-938e-f649701560ca,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d,PodSandboxId:b5e0a2e4aa6433763b8d18b42dd4f11026d8beb3b4b90779384a7cff5d00a752,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726270090041367668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8268a064-fb82-447e-987d-931165d33b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7,PodSandboxId:a8e55428ab3473cf848fb211c61d0da14e0ce0119c5c128946a4ae7abc094fbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726270086507718639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kx4xn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7804727-02ec-474f-b927-f1c4b25ebc89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f,PodSandboxId:f7778cd3a139f5708463f544d4d
50c4537d3db7fc51f5f63ef208e5602c00568,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726270083335367488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vg8p5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4c8131-921e-411d-853d-135361aa197b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4,PodSandboxId:c0df35fa7a5333ea6441cdf33b53930b3e456cad62cdc59ee5b806377737
1005,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726270070888851501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcd8b95a763196a2a35a097bd5eab7e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a,PodSandboxId:6a9771749e8e5322581ec09c85438fbbe056e49e1c12d26d7161c44d365f7a0c,Metadata:&ContainerMetadata{Name:kube-apis
erver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726270070880417347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12dfd4a2e36fe6cb94d70b96d2626ef4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f,PodSandboxId:555adaf092a3ac08fc9d2376f7fde2979ac90e3ae32f3e2d5abb05ab2d539d61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,}
,Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726270070855918344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8bbcdbb1c16b1ba3091d762550f625,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd,PodSandboxId:16df8b6062c13f3637ae2999adabd1897470466498876b49792ee08fb8d20e6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726270070802762562,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28dfe4944cd2af53875d3a7e7fc03c39,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc3a377a-f66a-4a46-bbf4-4955db39aab0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f66e484f0bcf3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   a2fbf4073cbb5       hello-world-app-55bf9c44b4-jwks5
	97beb09dce981       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago        Running             nginx                     0                   e4292583c4fab       nginx
	038624c91b1cd       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   4 minutes ago        Running             headlamp                  0                   2a8766ca0210c       headlamp-57fb76fcdb-z5dzh
	5196a5dc9c17b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   d580ec2a88560       gcp-auth-89d5ffd79-74znl
	04e992df68051       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Exited              metrics-server            0                   dc9bf0b998e05       metrics-server-84c5f94fbc-2rwbq
	bd8804d28cfdd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago       Running             local-path-provisioner    0                   458dcb49d1f7b       local-path-provisioner-86d989889c-5c8rt
	c9b12f34bf4ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   b5e0a2e4aa643       storage-provisioner
	d89a21338611a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago       Running             coredns                   0                   a8e55428ab347       coredns-7c65d6cfc9-kx4xn
	83331cb3777f3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        14 minutes ago       Running             kube-proxy                0                   f7778cd3a139f       kube-proxy-vg8p5
	04477f2de3ed2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   c0df35fa7a533       etcd-addons-473197
	56e77d112c7cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago       Running             kube-apiserver            0                   6a9771749e8e5       kube-apiserver-addons-473197
	6d8bc098317b8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago       Running             kube-scheduler            0                   555adaf092a3a       kube-scheduler-addons-473197
	5654029eb497f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago       Running             kube-controller-manager   0                   16df8b6062c13       kube-controller-manager-addons-473197
	
	
	==> coredns [d89a21338611a53bc918db97df850487d67bddd08b4f68c8253fb4229ee888d7] <==
	[INFO] 127.0.0.1:45670 - 7126 "HINFO IN 5243104806893607912.7915310536040454133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013008283s
	[INFO] 10.244.0.7:35063 - 39937 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000380883s
	[INFO] 10.244.0.7:35063 - 43782 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014847s
	[INFO] 10.244.0.7:57829 - 35566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163865s
	[INFO] 10.244.0.7:57829 - 30448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000958s
	[INFO] 10.244.0.7:39015 - 39866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132201s
	[INFO] 10.244.0.7:39015 - 60863 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107562s
	[INFO] 10.244.0.7:58981 - 30723 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000162373s
	[INFO] 10.244.0.7:58981 - 46338 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00022074s
	[INFO] 10.244.0.7:42427 - 30557 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119811s
	[INFO] 10.244.0.7:42427 - 64858 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194198s
	[INFO] 10.244.0.7:47702 - 27656 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006553s
	[INFO] 10.244.0.7:47702 - 4878 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042687s
	[INFO] 10.244.0.7:44162 - 12670 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051358s
	[INFO] 10.244.0.7:44162 - 55416 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106292s
	[INFO] 10.244.0.7:42573 - 35758 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040379s
	[INFO] 10.244.0.7:42573 - 45232 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000289788s
	[INFO] 10.244.0.22:35446 - 19101 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000568711s
	[INFO] 10.244.0.22:46347 - 39209 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000700369s
	[INFO] 10.244.0.22:55127 - 33729 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167148s
	[INFO] 10.244.0.22:59606 - 29197 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000295747s
	[INFO] 10.244.0.22:59298 - 45525 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000336329s
	[INFO] 10.244.0.22:46438 - 8493 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150611s
	[INFO] 10.244.0.22:45134 - 55606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000995828s
	[INFO] 10.244.0.22:56372 - 20336 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001287124s
	
	
	==> describe nodes <==
	Name:               addons-473197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-473197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-473197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_27_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-473197
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-473197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:42:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:41:32 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:41:32 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:41:32 +0000   Fri, 13 Sep 2024 23:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:41:32 +0000   Fri, 13 Sep 2024 23:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-473197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a5e8d89e8ad43a6a8c642064226a573
	  System UUID:                2a5e8d89-e8ad-43a6-a8c6-42064226a573
	  Boot ID:                    f73ad719-e78b-4b75-b596-4b22311bf8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-jwks5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  gcp-auth                    gcp-auth-89d5ffd79-74znl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  headlamp                    headlamp-57fb76fcdb-z5dzh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7c65d6cfc9-kx4xn                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-473197                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-473197               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-473197      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-vg8p5                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-473197               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-5c8rt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-473197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-473197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-473197 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-473197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-473197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-473197 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-473197 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-473197 event: Registered Node addons-473197 in Controller
	  Normal  CIDRAssignmentFailed     15m                cidrAllocator    Node addons-473197 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[ +11.767727] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.860640] kauditd_printk_skb: 4 callbacks suppressed
	[Sep13 23:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.350538] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.114970] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.355485] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.753980] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.472896] kauditd_printk_skb: 14 callbacks suppressed
	[ +24.455652] kauditd_printk_skb: 32 callbacks suppressed
	[Sep13 23:30] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 23:37] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.847903] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.069379] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.043967] kauditd_printk_skb: 10 callbacks suppressed
	[Sep13 23:38] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.853283] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.843077] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.344633] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.164878] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.255016] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.389323] kauditd_printk_skb: 19 callbacks suppressed
	[Sep13 23:41] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.399611] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [04477f2de3ed28e77a19f968e4139e4f8f6a01e0dc9e16aa3eff05614f569bd4] <==
	{"level":"info","ts":"2024-09-13T23:37:56.077320Z","caller":"traceutil/trace.go:171","msg":"trace[2140372478] linearizableReadLoop","detail":"{readStateIndex:2209; appliedIndex:2208; }","duration":"247.359784ms","start":"2024-09-13T23:37:55.829919Z","end":"2024-09-13T23:37:56.077279Z","steps":["trace[2140372478] 'read index received'  (duration: 247.248443ms)","trace[2140372478] 'applied index is now lower than readState.Index'  (duration: 110.59µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:37:56.077451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.493847ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:37:56.077484Z","caller":"traceutil/trace.go:171","msg":"trace[388607265] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2064; }","duration":"247.558448ms","start":"2024-09-13T23:37:55.829913Z","end":"2024-09-13T23:37:56.077472Z","steps":["trace[388607265] 'agreement among raft nodes before linearized reading'  (duration: 247.477707ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:37:56.077636Z","caller":"traceutil/trace.go:171","msg":"trace[772616711] transaction","detail":"{read_only:false; response_revision:2064; number_of_response:1; }","duration":"342.562437ms","start":"2024-09-13T23:37:55.735053Z","end":"2024-09-13T23:37:56.077616Z","steps":["trace[772616711] 'process raft request'  (duration: 342.117628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:37:56.077806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:37:55.735020Z","time spent":"342.655019ms","remote":"127.0.0.1:53072","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-13T23:38:26.271493Z","caller":"traceutil/trace.go:171","msg":"trace[2131628066] linearizableReadLoop","detail":"{readStateIndex:2494; appliedIndex:2493; }","duration":"108.567306ms","start":"2024-09-13T23:38:26.162913Z","end":"2024-09-13T23:38:26.271481Z","steps":["trace[2131628066] 'read index received'  (duration: 108.433015ms)","trace[2131628066] 'applied index is now lower than readState.Index'  (duration: 133.742µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:38:26.271587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.679598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:26.271607Z","caller":"traceutil/trace.go:171","msg":"trace[907710598] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-attacher; range_end:; response_count:0; response_revision:2337; }","duration":"108.715806ms","start":"2024-09-13T23:38:26.162886Z","end":"2024-09-13T23:38:26.271602Z","steps":["trace[907710598] 'agreement among raft nodes before linearized reading'  (duration: 108.663744ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:38:26.271800Z","caller":"traceutil/trace.go:171","msg":"trace[1022302903] transaction","detail":"{read_only:false; response_revision:2337; number_of_response:1; }","duration":"163.9076ms","start":"2024-09-13T23:38:26.107885Z","end":"2024-09-13T23:38:26.271793Z","steps":["trace[1022302903] 'process raft request'  (duration: 163.492838ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:38:33.012093Z","caller":"traceutil/trace.go:171","msg":"trace[1999613905] linearizableReadLoop","detail":"{readStateIndex:2536; appliedIndex:2535; }","duration":"332.084954ms","start":"2024-09-13T23:38:32.679984Z","end":"2024-09-13T23:38:33.012069Z","steps":["trace[1999613905] 'read index received'  (duration: 331.823648ms)","trace[1999613905] 'applied index is now lower than readState.Index'  (duration: 260.868µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T23:38:33.012294Z","caller":"traceutil/trace.go:171","msg":"trace[1261167858] transaction","detail":"{read_only:false; response_revision:2376; number_of_response:1; }","duration":"410.968582ms","start":"2024-09-13T23:38:32.601315Z","end":"2024-09-13T23:38:33.012284Z","steps":["trace[1261167858] 'process raft request'  (duration: 410.572548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.21368ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:33.012477Z","caller":"traceutil/trace.go:171","msg":"trace[420653707] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2376; }","duration":"182.278804ms","start":"2024-09-13T23:38:32.830178Z","end":"2024-09-13T23:38:33.012457Z","steps":["trace[420653707] 'agreement among raft nodes before linearized reading'  (duration: 182.193813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:38:32.601291Z","time spent":"411.032114ms","remote":"127.0.0.1:52964","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2374 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-13T23:38:33.012625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.637201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:38:33.012648Z","caller":"traceutil/trace.go:171","msg":"trace[1394512193] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2376; }","duration":"332.657728ms","start":"2024-09-13T23:38:32.679980Z","end":"2024-09-13T23:38:33.012638Z","steps":["trace[1394512193] 'agreement among raft nodes before linearized reading'  (duration: 332.619348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T23:38:33.012667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T23:38:32.679948Z","time spent":"332.7162ms","remote":"127.0.0.1:52786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-13T23:39:03.932214Z","caller":"traceutil/trace.go:171","msg":"trace[187426515] linearizableReadLoop","detail":"{readStateIndex:2670; appliedIndex:2669; }","duration":"102.345322ms","start":"2024-09-13T23:39:03.829836Z","end":"2024-09-13T23:39:03.932181Z","steps":["trace[187426515] 'read index received'  (duration: 102.066171ms)","trace[187426515] 'applied index is now lower than readState.Index'  (duration: 278.498µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-13T23:39:03.932347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.485385ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T23:39:03.932376Z","caller":"traceutil/trace.go:171","msg":"trace[1119183525] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2503; }","duration":"102.540076ms","start":"2024-09-13T23:39:03.829827Z","end":"2024-09-13T23:39:03.932367Z","steps":["trace[1119183525] 'agreement among raft nodes before linearized reading'  (duration: 102.470259ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:39:03.932531Z","caller":"traceutil/trace.go:171","msg":"trace[1350711930] transaction","detail":"{read_only:false; response_revision:2503; number_of_response:1; }","duration":"117.262386ms","start":"2024-09-13T23:39:03.815262Z","end":"2024-09-13T23:39:03.932524Z","steps":["trace[1350711930] 'process raft request'  (duration: 116.68468ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:39:19.058092Z","caller":"traceutil/trace.go:171","msg":"trace[1959501530] transaction","detail":"{read_only:false; response_revision:2517; number_of_response:1; }","duration":"107.407476ms","start":"2024-09-13T23:39:18.950665Z","end":"2024-09-13T23:39:19.058072Z","steps":["trace[1959501530] 'process raft request'  (duration: 107.288226ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:42:51.896971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2056}
	{"level":"info","ts":"2024-09-13T23:42:51.920485Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2056,"took":"22.655077ms","hash":934935153,"current-db-size-bytes":6725632,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":5087232,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-13T23:42:51.920582Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":934935153,"revision":2056,"compact-revision":1557}
	
	
	==> gcp-auth [5196a5dc9c17bba7105139c34f9cf71c4a063b59488625d5a968b024353db299] <==
	2024/09/13 23:29:31 Ready to write response ...
	2024/09/13 23:37:39 Ready to marshal response ...
	2024/09/13 23:37:39 Ready to write response ...
	2024/09/13 23:37:45 Ready to marshal response ...
	2024/09/13 23:37:45 Ready to write response ...
	2024/09/13 23:37:46 Ready to marshal response ...
	2024/09/13 23:37:46 Ready to write response ...
	2024/09/13 23:37:46 Ready to marshal response ...
	2024/09/13 23:37:46 Ready to write response ...
	2024/09/13 23:37:47 Ready to marshal response ...
	2024/09/13 23:37:47 Ready to write response ...
	2024/09/13 23:38:00 Ready to marshal response ...
	2024/09/13 23:38:00 Ready to write response ...
	2024/09/13 23:38:11 Ready to marshal response ...
	2024/09/13 23:38:11 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:24 Ready to marshal response ...
	2024/09/13 23:38:24 Ready to write response ...
	2024/09/13 23:38:40 Ready to marshal response ...
	2024/09/13 23:38:40 Ready to write response ...
	2024/09/13 23:41:04 Ready to marshal response ...
	2024/09/13 23:41:04 Ready to write response ...
	
	
	==> kernel <==
	 23:43:02 up 15 min,  0 users,  load average: 0.02, 0.28, 0.42
	Linux addons-473197 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56e77d112c7cc53581fa0db7255299178cd8fdc0d4a45656b2bb63a1a8f9144a] <==
	 > logger="UnhandledError"
	E0913 23:29:58.924676       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.102.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.102.69:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.102.69:443: connect: connection refused" logger="UnhandledError"
	E0913 23:29:58.955657       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0913 23:29:58.960975       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0913 23:38:03.472937       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0913 23:38:24.878230       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.178.54"}
	I0913 23:38:28.146928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.146968       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.188882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.188920       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.207934       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.207989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.290379       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.290409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:28.311424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:28.311452       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 23:38:29.290607       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0913 23:38:29.311717       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 23:38:29.343244       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0913 23:38:35.108228       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 23:38:36.237049       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 23:38:40.565186       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 23:38:40.744026       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.251.250"}
	I0913 23:41:04.489776       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.154.138"}
	E0913 23:41:06.929046       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [5654029eb497f1d865adad0e5c5791e98c01db33a87c061a2d9b192171f82bfd] <==
	I0913 23:41:16.921930       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0913 23:41:18.729598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:41:18.729637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:41:19.809671       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:41:19.809747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:41:32.897659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-473197"
	W0913 23:41:50.630097       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:41:50.630279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:41:54.211974       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:41:54.212184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:41:56.323681       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:41:56.323805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:42:13.913204       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:42:13.913287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:42:25.008287       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:42:25.008460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:42:41.415972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:42:41.416134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:42:44.004229       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:42:44.004350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:43:00.685783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:43:00.685832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:43:00.806689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.923µs"
	W0913 23:43:01.692999       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:43:01.693066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [83331cb3777f3d95f15bec2f7db746a36413ef2182c29a96b4ac22b08e53656f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:28:04.380224       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:28:04.489950       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.50"]
	E0913 23:28:04.490030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:28:04.594464       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:28:04.594495       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:28:04.594519       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:28:04.603873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:28:04.604221       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:28:04.604252       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:28:04.605991       1 config.go:199] "Starting service config controller"
	I0913 23:28:04.606001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:28:04.606031       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:28:04.606036       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:28:04.618190       1 config.go:328] "Starting node config controller"
	I0913 23:28:04.618220       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:28:04.706337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:28:04.706402       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:28:04.718993       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6d8bc098317b8a50c7a5d02b8499588f15b8f3b4f30279481bb76dcb7edf1e7f] <==
	W0913 23:27:54.609234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 23:27:54.609344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.615180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 23:27:54.615314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.634487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:54.634695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.650017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 23:27:54.650225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.663547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 23:27:54.663702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.739538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 23:27:54.739633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.802428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 23:27:54.802534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.802606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 23:27:54.802645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:54.915039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 23:27:54.915259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.056348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.056469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.122788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.122892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:55.209039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:55.209209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 23:27:57.297586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:42:17 addons-473197 kubelet[1197]: E0913 23:42:17.620299    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270937619511270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:17 addons-473197 kubelet[1197]: E0913 23:42:17.620376    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270937619511270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:22 addons-473197 kubelet[1197]: E0913 23:42:22.969925    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b7a4adaf-7929-4bb9-9ec5-b24ee1a8c88a"
	Sep 13 23:42:27 addons-473197 kubelet[1197]: E0913 23:42:27.622860    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270947622370982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:27 addons-473197 kubelet[1197]: E0913 23:42:27.623199    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270947622370982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:36 addons-473197 kubelet[1197]: E0913 23:42:36.972438    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b7a4adaf-7929-4bb9-9ec5-b24ee1a8c88a"
	Sep 13 23:42:37 addons-473197 kubelet[1197]: E0913 23:42:37.625911    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270957625383553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:37 addons-473197 kubelet[1197]: E0913 23:42:37.625951    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270957625383553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:47 addons-473197 kubelet[1197]: E0913 23:42:47.629410    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270967629021732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:47 addons-473197 kubelet[1197]: E0913 23:42:47.629455    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270967629021732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:48 addons-473197 kubelet[1197]: E0913 23:42:48.970087    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b7a4adaf-7929-4bb9-9ec5-b24ee1a8c88a"
	Sep 13 23:42:56 addons-473197 kubelet[1197]: E0913 23:42:56.999225    1197 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 23:42:56 addons-473197 kubelet[1197]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 23:42:56 addons-473197 kubelet[1197]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 23:42:56 addons-473197 kubelet[1197]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 23:42:56 addons-473197 kubelet[1197]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 23:42:57 addons-473197 kubelet[1197]: E0913 23:42:57.633640    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270977632930544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:42:57 addons-473197 kubelet[1197]: E0913 23:42:57.633689    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726270977632930544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579801,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:43:00 addons-473197 kubelet[1197]: I0913 23:43:00.848731    1197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-jwks5" podStartSLOduration=114.524782175 podStartE2EDuration="1m56.848702448s" podCreationTimestamp="2024-09-13 23:41:04 +0000 UTC" firstStartedPulling="2024-09-13 23:41:04.897337787 +0000 UTC m=+788.058308900" lastFinishedPulling="2024-09-13 23:41:07.22125807 +0000 UTC m=+790.382229173" observedRunningTime="2024-09-13 23:41:08.215885805 +0000 UTC m=+791.376856926" watchObservedRunningTime="2024-09-13 23:43:00.848702448 +0000 UTC m=+904.009673628"
	Sep 13 23:43:02 addons-473197 kubelet[1197]: I0913 23:43:02.285490    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvsmr\" (UniqueName: \"kubernetes.io/projected/157685d1-cf53-409b-8a21-e77779bcbbd6-kube-api-access-hvsmr\") pod \"157685d1-cf53-409b-8a21-e77779bcbbd6\" (UID: \"157685d1-cf53-409b-8a21-e77779bcbbd6\") "
	Sep 13 23:43:02 addons-473197 kubelet[1197]: I0913 23:43:02.285572    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/157685d1-cf53-409b-8a21-e77779bcbbd6-tmp-dir\") pod \"157685d1-cf53-409b-8a21-e77779bcbbd6\" (UID: \"157685d1-cf53-409b-8a21-e77779bcbbd6\") "
	Sep 13 23:43:02 addons-473197 kubelet[1197]: I0913 23:43:02.285943    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157685d1-cf53-409b-8a21-e77779bcbbd6-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "157685d1-cf53-409b-8a21-e77779bcbbd6" (UID: "157685d1-cf53-409b-8a21-e77779bcbbd6"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 13 23:43:02 addons-473197 kubelet[1197]: I0913 23:43:02.294723    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157685d1-cf53-409b-8a21-e77779bcbbd6-kube-api-access-hvsmr" (OuterVolumeSpecName: "kube-api-access-hvsmr") pod "157685d1-cf53-409b-8a21-e77779bcbbd6" (UID: "157685d1-cf53-409b-8a21-e77779bcbbd6"). InnerVolumeSpecName "kube-api-access-hvsmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:43:02 addons-473197 kubelet[1197]: I0913 23:43:02.386305    1197 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/157685d1-cf53-409b-8a21-e77779bcbbd6-tmp-dir\") on node \"addons-473197\" DevicePath \"\""
	Sep 13 23:43:02 addons-473197 kubelet[1197]: I0913 23:43:02.386344    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hvsmr\" (UniqueName: \"kubernetes.io/projected/157685d1-cf53-409b-8a21-e77779bcbbd6-kube-api-access-hvsmr\") on node \"addons-473197\" DevicePath \"\""
	
	
	==> storage-provisioner [c9b12f34bf4aeac7448ecddc58f0dfb943bc20b7f76c15cb1ce6710c9588496d] <==
	I0913 23:28:10.804057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:28:11.078500       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:28:11.078567       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:28:11.120016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:28:11.124355       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c048df4-0a4e-4b96-9f0e-8fcf6762cf64", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8 became leader
	I0913 23:28:11.124757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8!
	I0913 23:28:11.226238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-473197_45056357-e9aa-4cb6-809a-8accb22143f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-473197 -n addons-473197
helpers_test.go:261: (dbg) Run:  kubectl --context addons-473197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-473197 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-473197 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-473197/192.168.39.50
	Start Time:       Fri, 13 Sep 2024 23:29:31 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nj4pg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nj4pg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-473197
	  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m30s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (329.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 node stop m02 -v=7 --alsologtostderr
E0913 23:58:01.600443   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:58:42.562464   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:59:31.535461   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.484351993s)

                                                
                                                
-- stdout --
	* Stopping node "ha-817269-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 23:57:50.101246   29252 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:57:50.101399   29252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:57:50.101410   29252 out.go:358] Setting ErrFile to fd 2...
	I0913 23:57:50.101417   29252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:57:50.101597   29252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:57:50.101870   29252 mustload.go:65] Loading cluster: ha-817269
	I0913 23:57:50.102262   29252 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:57:50.102283   29252 stop.go:39] StopHost: ha-817269-m02
	I0913 23:57:50.102647   29252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:57:50.102691   29252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:57:50.117745   29252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0913 23:57:50.118179   29252 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:57:50.118711   29252 main.go:141] libmachine: Using API Version  1
	I0913 23:57:50.118737   29252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:57:50.119119   29252 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:57:50.121344   29252 out.go:177] * Stopping node "ha-817269-m02"  ...
	I0913 23:57:50.122602   29252 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0913 23:57:50.122635   29252 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:57:50.122866   29252 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0913 23:57:50.122898   29252 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:57:50.125654   29252 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:57:50.126018   29252 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:57:50.126050   29252 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:57:50.126130   29252 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:57:50.126302   29252 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:57:50.126442   29252 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:57:50.126605   29252 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:57:50.214330   29252 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0913 23:57:50.267033   29252 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0913 23:57:50.320860   29252 main.go:141] libmachine: Stopping "ha-817269-m02"...
	I0913 23:57:50.320892   29252 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0913 23:57:50.322288   29252 main.go:141] libmachine: (ha-817269-m02) Calling .Stop
	I0913 23:57:50.327252   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 0/120
	I0913 23:57:51.328503   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 1/120
	I0913 23:57:52.330358   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 2/120
	I0913 23:57:53.331536   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 3/120
	I0913 23:57:54.332806   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 4/120
	I0913 23:57:55.335841   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 5/120
	I0913 23:57:56.337188   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 6/120
	I0913 23:57:57.338469   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 7/120
	I0913 23:57:58.339758   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 8/120
	I0913 23:57:59.342147   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 9/120
	I0913 23:58:00.344305   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 10/120
	I0913 23:58:01.346688   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 11/120
	I0913 23:58:02.348292   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 12/120
	I0913 23:58:03.350251   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 13/120
	I0913 23:58:04.352031   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 14/120
	I0913 23:58:05.354164   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 15/120
	I0913 23:58:06.356131   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 16/120
	I0913 23:58:07.357473   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 17/120
	I0913 23:58:08.359292   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 18/120
	I0913 23:58:09.360583   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 19/120
	I0913 23:58:10.363245   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 20/120
	I0913 23:58:11.365005   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 21/120
	I0913 23:58:12.366498   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 22/120
	I0913 23:58:13.369401   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 23/120
	I0913 23:58:14.370647   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 24/120
	I0913 23:58:15.372980   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 25/120
	I0913 23:58:16.374467   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 26/120
	I0913 23:58:17.376051   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 27/120
	I0913 23:58:18.378161   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 28/120
	I0913 23:58:19.379977   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 29/120
	I0913 23:58:20.382148   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 30/120
	I0913 23:58:21.385260   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 31/120
	I0913 23:58:22.386837   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 32/120
	I0913 23:58:23.388138   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 33/120
	I0913 23:58:24.390283   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 34/120
	I0913 23:58:25.392691   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 35/120
	I0913 23:58:26.395273   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 36/120
	I0913 23:58:27.396677   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 37/120
	I0913 23:58:28.398264   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 38/120
	I0913 23:58:29.400410   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 39/120
	I0913 23:58:30.402629   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 40/120
	I0913 23:58:31.404108   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 41/120
	I0913 23:58:32.406390   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 42/120
	I0913 23:58:33.408402   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 43/120
	I0913 23:58:34.410536   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 44/120
	I0913 23:58:35.412547   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 45/120
	I0913 23:58:36.414574   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 46/120
	I0913 23:58:37.416067   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 47/120
	I0913 23:58:38.418690   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 48/120
	I0913 23:58:39.420703   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 49/120
	I0913 23:58:40.422907   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 50/120
	I0913 23:58:41.424286   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 51/120
	I0913 23:58:42.426023   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 52/120
	I0913 23:58:43.427674   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 53/120
	I0913 23:58:44.429883   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 54/120
	I0913 23:58:45.431942   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 55/120
	I0913 23:58:46.433476   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 56/120
	I0913 23:58:47.434903   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 57/120
	I0913 23:58:48.436385   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 58/120
	I0913 23:58:49.438241   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 59/120
	I0913 23:58:50.440412   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 60/120
	I0913 23:58:51.443127   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 61/120
	I0913 23:58:52.444786   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 62/120
	I0913 23:58:53.446298   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 63/120
	I0913 23:58:54.447698   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 64/120
	I0913 23:58:55.449366   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 65/120
	I0913 23:58:56.450870   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 66/120
	I0913 23:58:57.452591   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 67/120
	I0913 23:58:58.454929   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 68/120
	I0913 23:58:59.456949   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 69/120
	I0913 23:59:00.459359   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 70/120
	I0913 23:59:01.461370   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 71/120
	I0913 23:59:02.462979   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 72/120
	I0913 23:59:03.465571   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 73/120
	I0913 23:59:04.466941   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 74/120
	I0913 23:59:05.469035   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 75/120
	I0913 23:59:06.470307   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 76/120
	I0913 23:59:07.472707   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 77/120
	I0913 23:59:08.474068   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 78/120
	I0913 23:59:09.476208   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 79/120
	I0913 23:59:10.478194   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 80/120
	I0913 23:59:11.479514   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 81/120
	I0913 23:59:12.480708   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 82/120
	I0913 23:59:13.482143   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 83/120
	I0913 23:59:14.483506   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 84/120
	I0913 23:59:15.486378   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 85/120
	I0913 23:59:16.487569   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 86/120
	I0913 23:59:17.489166   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 87/120
	I0913 23:59:18.490522   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 88/120
	I0913 23:59:19.491755   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 89/120
	I0913 23:59:20.493577   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 90/120
	I0913 23:59:21.494853   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 91/120
	I0913 23:59:22.496050   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 92/120
	I0913 23:59:23.498342   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 93/120
	I0913 23:59:24.500558   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 94/120
	I0913 23:59:25.501913   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 95/120
	I0913 23:59:26.503170   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 96/120
	I0913 23:59:27.504688   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 97/120
	I0913 23:59:28.506033   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 98/120
	I0913 23:59:29.507340   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 99/120
	I0913 23:59:30.509726   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 100/120
	I0913 23:59:31.511009   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 101/120
	I0913 23:59:32.512605   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 102/120
	I0913 23:59:33.514001   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 103/120
	I0913 23:59:34.515503   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 104/120
	I0913 23:59:35.517729   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 105/120
	I0913 23:59:36.520134   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 106/120
	I0913 23:59:37.521428   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 107/120
	I0913 23:59:38.522903   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 108/120
	I0913 23:59:39.524276   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 109/120
	I0913 23:59:40.527358   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 110/120
	I0913 23:59:41.528915   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 111/120
	I0913 23:59:42.530240   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 112/120
	I0913 23:59:43.532328   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 113/120
	I0913 23:59:44.533825   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 114/120
	I0913 23:59:45.535746   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 115/120
	I0913 23:59:46.537146   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 116/120
	I0913 23:59:47.538477   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 117/120
	I0913 23:59:48.540001   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 118/120
	I0913 23:59:49.541456   29252 main.go:141] libmachine: (ha-817269-m02) Waiting for machine to stop 119/120
	I0913 23:59:50.542710   29252 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0913 23:59:50.542842   29252 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-817269 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
E0914 00:00:04.484504   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (19.235633355s)

                                                
                                                
-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 23:59:50.585366   29674 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:59:50.585611   29674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:59:50.585621   29674 out.go:358] Setting ErrFile to fd 2...
	I0913 23:59:50.585625   29674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:59:50.585805   29674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:59:50.585963   29674 out.go:352] Setting JSON to false
	I0913 23:59:50.585994   29674 mustload.go:65] Loading cluster: ha-817269
	I0913 23:59:50.586041   29674 notify.go:220] Checking for updates...
	I0913 23:59:50.586556   29674 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:59:50.586578   29674 status.go:255] checking status of ha-817269 ...
	I0913 23:59:50.587093   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:59:50.587139   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:59:50.605770   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0913 23:59:50.606258   29674 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:59:50.606918   29674 main.go:141] libmachine: Using API Version  1
	I0913 23:59:50.606938   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:59:50.607283   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:59:50.607469   29674 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:59:50.609225   29674 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0913 23:59:50.609243   29674 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:59:50.609638   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:59:50.609675   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:59:50.624765   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0913 23:59:50.625298   29674 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:59:50.625838   29674 main.go:141] libmachine: Using API Version  1
	I0913 23:59:50.625861   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:59:50.626199   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:59:50.626474   29674 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:59:50.629045   29674 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:59:50.629491   29674 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:59:50.629507   29674 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:59:50.629640   29674 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:59:50.629944   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:59:50.629984   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:59:50.645879   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0913 23:59:50.646451   29674 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:59:50.646965   29674 main.go:141] libmachine: Using API Version  1
	I0913 23:59:50.646991   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:59:50.647368   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:59:50.647564   29674 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:59:50.647743   29674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:59:50.647770   29674 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:59:50.650249   29674 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:59:50.650635   29674 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:59:50.650662   29674 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:59:50.650891   29674 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:59:50.651111   29674 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:59:50.651277   29674 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:59:50.651425   29674 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:59:50.736460   29674 ssh_runner.go:195] Run: systemctl --version
	I0913 23:59:50.743765   29674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:59:50.761137   29674 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0913 23:59:50.761170   29674 api_server.go:166] Checking apiserver status ...
	I0913 23:59:50.761206   29674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:59:50.776581   29674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0913 23:59:50.786155   29674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 23:59:50.786221   29674 ssh_runner.go:195] Run: ls
	I0913 23:59:50.790208   29674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 23:59:50.794339   29674 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 23:59:50.794362   29674 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0913 23:59:50.794371   29674 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:59:50.794386   29674 status.go:255] checking status of ha-817269-m02 ...
	I0913 23:59:50.794670   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:59:50.794703   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:59:50.809370   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0913 23:59:50.809890   29674 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:59:50.810362   29674 main.go:141] libmachine: Using API Version  1
	I0913 23:59:50.810396   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:59:50.810751   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:59:50.810921   29674 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0913 23:59:50.812724   29674 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0913 23:59:50.812742   29674 host.go:66] Checking if "ha-817269-m02" exists ...
	I0913 23:59:50.813034   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:59:50.813069   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:59:50.828629   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0913 23:59:50.829149   29674 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:59:50.829834   29674 main.go:141] libmachine: Using API Version  1
	I0913 23:59:50.829871   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:59:50.830258   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:59:50.830460   29674 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:59:50.833018   29674 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:59:50.833387   29674 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:59:50.833426   29674 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:59:50.833541   29674 host.go:66] Checking if "ha-817269-m02" exists ...
	I0913 23:59:50.833815   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:59:50.833864   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:59:50.848791   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38527
	I0913 23:59:50.849279   29674 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:59:50.849771   29674 main.go:141] libmachine: Using API Version  1
	I0913 23:59:50.849800   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:59:50.850147   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:59:50.850307   29674 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:59:50.850476   29674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:59:50.850494   29674 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:59:50.854042   29674 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:59:50.854453   29674 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:59:50.854481   29674 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:59:50.854661   29674 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:59:50.854811   29674 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:59:50.854978   29674 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:59:50.855084   29674 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:09.420006   29674 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:09.420109   29674 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:09.420125   29674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:09.420137   29674 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:09.420155   29674 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:09.420162   29674 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:09.420451   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:09.420509   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:09.436368   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0914 00:00:09.436878   29674 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:09.437368   29674 main.go:141] libmachine: Using API Version  1
	I0914 00:00:09.437390   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:09.437725   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:09.437893   29674 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:09.439519   29674 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:09.439535   29674 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:09.439926   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:09.439970   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:09.456086   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42857
	I0914 00:00:09.456522   29674 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:09.456989   29674 main.go:141] libmachine: Using API Version  1
	I0914 00:00:09.457005   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:09.457293   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:09.457472   29674 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:09.460607   29674 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:09.461070   29674 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:09.461093   29674 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:09.461259   29674 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:09.461558   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:09.461595   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:09.476160   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0914 00:00:09.476620   29674 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:09.477094   29674 main.go:141] libmachine: Using API Version  1
	I0914 00:00:09.477115   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:09.477449   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:09.477639   29674 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:09.477804   29674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:09.477823   29674 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:09.480705   29674 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:09.481196   29674 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:09.481222   29674 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:09.481344   29674 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:09.481514   29674 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:09.481662   29674 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:09.481766   29674 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:09.565130   29674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:09.584367   29674 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:09.584398   29674 api_server.go:166] Checking apiserver status ...
	I0914 00:00:09.584441   29674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:09.599255   29674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:09.609364   29674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:09.609422   29674 ssh_runner.go:195] Run: ls
	I0914 00:00:09.613626   29674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:09.619843   29674 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:09.619871   29674 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:09.619882   29674 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:09.619901   29674 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:09.620301   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:09.620340   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:09.635277   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I0914 00:00:09.635731   29674 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:09.636271   29674 main.go:141] libmachine: Using API Version  1
	I0914 00:00:09.636290   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:09.636634   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:09.636839   29674 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:09.638467   29674 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:09.638482   29674 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:09.638758   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:09.638790   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:09.653253   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0914 00:00:09.653768   29674 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:09.654250   29674 main.go:141] libmachine: Using API Version  1
	I0914 00:00:09.654286   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:09.654608   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:09.654788   29674 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:09.657730   29674 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:09.658163   29674 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:09.658202   29674 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:09.658334   29674 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:09.658702   29674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:09.658737   29674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:09.673459   29674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0914 00:00:09.673922   29674 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:09.674484   29674 main.go:141] libmachine: Using API Version  1
	I0914 00:00:09.674505   29674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:09.674805   29674 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:09.674988   29674 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:09.675183   29674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:09.675202   29674 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:09.678141   29674 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:09.678554   29674 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:09.678581   29674 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:09.678697   29674 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:09.678871   29674 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:09.679004   29674 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:09.679150   29674 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:09.760332   29674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:09.777909   29674 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-817269 -n ha-817269
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-817269 logs -n 25: (1.381946225s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269:/home/docker/cp-test_ha-817269-m03_ha-817269.txt                      |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269 sudo cat                                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269.txt                                |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m04 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp testdata/cp-test.txt                                               | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269:/home/docker/cp-test_ha-817269-m04_ha-817269.txt                      |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269 sudo cat                                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269.txt                                |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03:/home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m03 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-817269 node stop m02 -v=7                                                    | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:53:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:53:10.992229   25213 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:53:10.992351   25213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:53:10.992359   25213 out.go:358] Setting ErrFile to fd 2...
	I0913 23:53:10.992364   25213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:53:10.992582   25213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:53:10.993182   25213 out.go:352] Setting JSON to false
	I0913 23:53:10.994007   25213 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2137,"bootTime":1726269454,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:53:10.994114   25213 start.go:139] virtualization: kvm guest
	I0913 23:53:10.996352   25213 out.go:177] * [ha-817269] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:53:10.997878   25213 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:53:10.997885   25213 notify.go:220] Checking for updates...
	I0913 23:53:11.000664   25213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:53:11.001976   25213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:53:11.003286   25213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:11.004578   25213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:53:11.005770   25213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:53:11.007008   25213 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:53:11.043705   25213 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 23:53:11.045285   25213 start.go:297] selected driver: kvm2
	I0913 23:53:11.045307   25213 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:53:11.045322   25213 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:53:11.046039   25213 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:53:11.046135   25213 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:53:11.062537   25213 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:53:11.062601   25213 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:53:11.062838   25213 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:53:11.062868   25213 cni.go:84] Creating CNI manager for ""
	I0913 23:53:11.062912   25213 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0913 23:53:11.062918   25213 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 23:53:11.062975   25213 start.go:340] cluster config:
	{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:53:11.063101   25213 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:53:11.065303   25213 out.go:177] * Starting "ha-817269" primary control-plane node in "ha-817269" cluster
	I0913 23:53:11.066558   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:53:11.066607   25213 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:53:11.066629   25213 cache.go:56] Caching tarball of preloaded images
	I0913 23:53:11.066745   25213 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:53:11.066759   25213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:53:11.067057   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:11.067078   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json: {Name:mk941005e99ea2467f0024292cb50e3b0a4dc797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:11.067247   25213 start.go:360] acquireMachinesLock for ha-817269: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:53:11.067307   25213 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "ha-817269"
	I0913 23:53:11.067333   25213 start.go:93] Provisioning new machine with config: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:53:11.067393   25213 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 23:53:11.069056   25213 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 23:53:11.069210   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:11.069254   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:11.084868   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0913 23:53:11.085427   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:11.086011   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:11.086031   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:11.086441   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:11.086625   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:11.086765   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:11.086927   25213 start.go:159] libmachine.API.Create for "ha-817269" (driver="kvm2")
	I0913 23:53:11.086958   25213 client.go:168] LocalClient.Create starting
	I0913 23:53:11.086997   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:53:11.087038   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:11.087055   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:11.087115   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:53:11.087141   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:11.087157   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:11.087178   25213 main.go:141] libmachine: Running pre-create checks...
	I0913 23:53:11.087188   25213 main.go:141] libmachine: (ha-817269) Calling .PreCreateCheck
	I0913 23:53:11.087510   25213 main.go:141] libmachine: (ha-817269) Calling .GetConfigRaw
	I0913 23:53:11.088023   25213 main.go:141] libmachine: Creating machine...
	I0913 23:53:11.088037   25213 main.go:141] libmachine: (ha-817269) Calling .Create
	I0913 23:53:11.088224   25213 main.go:141] libmachine: (ha-817269) Creating KVM machine...
	I0913 23:53:11.089691   25213 main.go:141] libmachine: (ha-817269) DBG | found existing default KVM network
	I0913 23:53:11.090458   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.090231   25236 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0913 23:53:11.090491   25213 main.go:141] libmachine: (ha-817269) DBG | created network xml: 
	I0913 23:53:11.090503   25213 main.go:141] libmachine: (ha-817269) DBG | <network>
	I0913 23:53:11.090521   25213 main.go:141] libmachine: (ha-817269) DBG |   <name>mk-ha-817269</name>
	I0913 23:53:11.090538   25213 main.go:141] libmachine: (ha-817269) DBG |   <dns enable='no'/>
	I0913 23:53:11.090549   25213 main.go:141] libmachine: (ha-817269) DBG |   
	I0913 23:53:11.090556   25213 main.go:141] libmachine: (ha-817269) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 23:53:11.090559   25213 main.go:141] libmachine: (ha-817269) DBG |     <dhcp>
	I0913 23:53:11.090565   25213 main.go:141] libmachine: (ha-817269) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 23:53:11.090572   25213 main.go:141] libmachine: (ha-817269) DBG |     </dhcp>
	I0913 23:53:11.090581   25213 main.go:141] libmachine: (ha-817269) DBG |   </ip>
	I0913 23:53:11.090586   25213 main.go:141] libmachine: (ha-817269) DBG |   
	I0913 23:53:11.090593   25213 main.go:141] libmachine: (ha-817269) DBG | </network>
	I0913 23:53:11.090599   25213 main.go:141] libmachine: (ha-817269) DBG | 
	I0913 23:53:11.095940   25213 main.go:141] libmachine: (ha-817269) DBG | trying to create private KVM network mk-ha-817269 192.168.39.0/24...
	I0913 23:53:11.163359   25213 main.go:141] libmachine: (ha-817269) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269 ...
	I0913 23:53:11.163415   25213 main.go:141] libmachine: (ha-817269) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:53:11.163429   25213 main.go:141] libmachine: (ha-817269) DBG | private KVM network mk-ha-817269 192.168.39.0/24 created
	I0913 23:53:11.163449   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.163328   25236 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:11.163475   25213 main.go:141] libmachine: (ha-817269) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:53:11.414995   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.414842   25236 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa...
	I0913 23:53:11.595971   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.595821   25236 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/ha-817269.rawdisk...
	I0913 23:53:11.596001   25213 main.go:141] libmachine: (ha-817269) DBG | Writing magic tar header
	I0913 23:53:11.596011   25213 main.go:141] libmachine: (ha-817269) DBG | Writing SSH key tar header
	I0913 23:53:11.596018   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.595948   25236 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269 ...
	I0913 23:53:11.596100   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269
	I0913 23:53:11.596126   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269 (perms=drwx------)
	I0913 23:53:11.596136   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:53:11.596152   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:53:11.596167   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:53:11.596181   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:53:11.596198   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:53:11.596208   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:53:11.596219   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:11.596234   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:53:11.596243   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:53:11.596254   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:53:11.596263   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home
	I0913 23:53:11.596289   25213 main.go:141] libmachine: (ha-817269) DBG | Skipping /home - not owner
	I0913 23:53:11.596303   25213 main.go:141] libmachine: (ha-817269) Creating domain...
	I0913 23:53:11.597341   25213 main.go:141] libmachine: (ha-817269) define libvirt domain using xml: 
	I0913 23:53:11.597364   25213 main.go:141] libmachine: (ha-817269) <domain type='kvm'>
	I0913 23:53:11.597374   25213 main.go:141] libmachine: (ha-817269)   <name>ha-817269</name>
	I0913 23:53:11.597381   25213 main.go:141] libmachine: (ha-817269)   <memory unit='MiB'>2200</memory>
	I0913 23:53:11.597389   25213 main.go:141] libmachine: (ha-817269)   <vcpu>2</vcpu>
	I0913 23:53:11.597395   25213 main.go:141] libmachine: (ha-817269)   <features>
	I0913 23:53:11.597403   25213 main.go:141] libmachine: (ha-817269)     <acpi/>
	I0913 23:53:11.597409   25213 main.go:141] libmachine: (ha-817269)     <apic/>
	I0913 23:53:11.597415   25213 main.go:141] libmachine: (ha-817269)     <pae/>
	I0913 23:53:11.597429   25213 main.go:141] libmachine: (ha-817269)     
	I0913 23:53:11.597437   25213 main.go:141] libmachine: (ha-817269)   </features>
	I0913 23:53:11.597441   25213 main.go:141] libmachine: (ha-817269)   <cpu mode='host-passthrough'>
	I0913 23:53:11.597445   25213 main.go:141] libmachine: (ha-817269)   
	I0913 23:53:11.597451   25213 main.go:141] libmachine: (ha-817269)   </cpu>
	I0913 23:53:11.597481   25213 main.go:141] libmachine: (ha-817269)   <os>
	I0913 23:53:11.597515   25213 main.go:141] libmachine: (ha-817269)     <type>hvm</type>
	I0913 23:53:11.597528   25213 main.go:141] libmachine: (ha-817269)     <boot dev='cdrom'/>
	I0913 23:53:11.597539   25213 main.go:141] libmachine: (ha-817269)     <boot dev='hd'/>
	I0913 23:53:11.597573   25213 main.go:141] libmachine: (ha-817269)     <bootmenu enable='no'/>
	I0913 23:53:11.597590   25213 main.go:141] libmachine: (ha-817269)   </os>
	I0913 23:53:11.597596   25213 main.go:141] libmachine: (ha-817269)   <devices>
	I0913 23:53:11.597605   25213 main.go:141] libmachine: (ha-817269)     <disk type='file' device='cdrom'>
	I0913 23:53:11.597615   25213 main.go:141] libmachine: (ha-817269)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/boot2docker.iso'/>
	I0913 23:53:11.597622   25213 main.go:141] libmachine: (ha-817269)       <target dev='hdc' bus='scsi'/>
	I0913 23:53:11.597627   25213 main.go:141] libmachine: (ha-817269)       <readonly/>
	I0913 23:53:11.597634   25213 main.go:141] libmachine: (ha-817269)     </disk>
	I0913 23:53:11.597640   25213 main.go:141] libmachine: (ha-817269)     <disk type='file' device='disk'>
	I0913 23:53:11.597648   25213 main.go:141] libmachine: (ha-817269)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:53:11.597655   25213 main.go:141] libmachine: (ha-817269)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/ha-817269.rawdisk'/>
	I0913 23:53:11.597662   25213 main.go:141] libmachine: (ha-817269)       <target dev='hda' bus='virtio'/>
	I0913 23:53:11.597667   25213 main.go:141] libmachine: (ha-817269)     </disk>
	I0913 23:53:11.597673   25213 main.go:141] libmachine: (ha-817269)     <interface type='network'>
	I0913 23:53:11.597678   25213 main.go:141] libmachine: (ha-817269)       <source network='mk-ha-817269'/>
	I0913 23:53:11.597686   25213 main.go:141] libmachine: (ha-817269)       <model type='virtio'/>
	I0913 23:53:11.597695   25213 main.go:141] libmachine: (ha-817269)     </interface>
	I0913 23:53:11.597704   25213 main.go:141] libmachine: (ha-817269)     <interface type='network'>
	I0913 23:53:11.597712   25213 main.go:141] libmachine: (ha-817269)       <source network='default'/>
	I0913 23:53:11.597716   25213 main.go:141] libmachine: (ha-817269)       <model type='virtio'/>
	I0913 23:53:11.597722   25213 main.go:141] libmachine: (ha-817269)     </interface>
	I0913 23:53:11.597732   25213 main.go:141] libmachine: (ha-817269)     <serial type='pty'>
	I0913 23:53:11.597740   25213 main.go:141] libmachine: (ha-817269)       <target port='0'/>
	I0913 23:53:11.597744   25213 main.go:141] libmachine: (ha-817269)     </serial>
	I0913 23:53:11.597751   25213 main.go:141] libmachine: (ha-817269)     <console type='pty'>
	I0913 23:53:11.597755   25213 main.go:141] libmachine: (ha-817269)       <target type='serial' port='0'/>
	I0913 23:53:11.597763   25213 main.go:141] libmachine: (ha-817269)     </console>
	I0913 23:53:11.597769   25213 main.go:141] libmachine: (ha-817269)     <rng model='virtio'>
	I0913 23:53:11.597775   25213 main.go:141] libmachine: (ha-817269)       <backend model='random'>/dev/random</backend>
	I0913 23:53:11.597784   25213 main.go:141] libmachine: (ha-817269)     </rng>
	I0913 23:53:11.597791   25213 main.go:141] libmachine: (ha-817269)     
	I0913 23:53:11.597798   25213 main.go:141] libmachine: (ha-817269)     
	I0913 23:53:11.597805   25213 main.go:141] libmachine: (ha-817269)   </devices>
	I0913 23:53:11.597810   25213 main.go:141] libmachine: (ha-817269) </domain>
	I0913 23:53:11.597839   25213 main.go:141] libmachine: (ha-817269) 
	I0913 23:53:11.602075   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:d3:1c:ae in network default
	I0913 23:53:11.602702   25213 main.go:141] libmachine: (ha-817269) Ensuring networks are active...
	I0913 23:53:11.602745   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:11.603403   25213 main.go:141] libmachine: (ha-817269) Ensuring network default is active
	I0913 23:53:11.603718   25213 main.go:141] libmachine: (ha-817269) Ensuring network mk-ha-817269 is active
	I0913 23:53:11.604222   25213 main.go:141] libmachine: (ha-817269) Getting domain xml...
	I0913 23:53:11.604841   25213 main.go:141] libmachine: (ha-817269) Creating domain...
	I0913 23:53:12.819930   25213 main.go:141] libmachine: (ha-817269) Waiting to get IP...
	I0913 23:53:12.820703   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:12.821050   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:12.821108   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:12.821048   25236 retry.go:31] will retry after 252.038906ms: waiting for machine to come up
	I0913 23:53:13.074756   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:13.075365   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:13.075410   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:13.075309   25236 retry.go:31] will retry after 321.284859ms: waiting for machine to come up
	I0913 23:53:13.397726   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:13.398219   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:13.398243   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:13.398178   25236 retry.go:31] will retry after 348.399027ms: waiting for machine to come up
	I0913 23:53:13.747829   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:13.748247   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:13.748273   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:13.748201   25236 retry.go:31] will retry after 543.035066ms: waiting for machine to come up
	I0913 23:53:14.292901   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:14.293240   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:14.293266   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:14.293195   25236 retry.go:31] will retry after 627.458273ms: waiting for machine to come up
	I0913 23:53:14.922074   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:14.922439   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:14.922464   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:14.922402   25236 retry.go:31] will retry after 789.588185ms: waiting for machine to come up
	I0913 23:53:15.713440   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:15.713822   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:15.713870   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:15.713783   25236 retry.go:31] will retry after 845.063121ms: waiting for machine to come up
	I0913 23:53:16.560626   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:16.561178   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:16.561209   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:16.561142   25236 retry.go:31] will retry after 912.014634ms: waiting for machine to come up
	I0913 23:53:17.474565   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:17.475469   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:17.475500   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:17.475397   25236 retry.go:31] will retry after 1.824124091s: waiting for machine to come up
	I0913 23:53:19.301655   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:19.302297   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:19.302340   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:19.302247   25236 retry.go:31] will retry after 1.738487929s: waiting for machine to come up
	I0913 23:53:21.043153   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:21.043854   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:21.043884   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:21.043772   25236 retry.go:31] will retry after 2.838460047s: waiting for machine to come up
	I0913 23:53:23.885578   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:23.885976   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:23.886006   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:23.885922   25236 retry.go:31] will retry after 2.769913011s: waiting for machine to come up
	I0913 23:53:26.657329   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:26.657688   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:26.657713   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:26.657642   25236 retry.go:31] will retry after 4.533163335s: waiting for machine to come up
	I0913 23:53:31.192391   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:31.192864   25213 main.go:141] libmachine: (ha-817269) Found IP for machine: 192.168.39.132
	I0913 23:53:31.192885   25213 main.go:141] libmachine: (ha-817269) Reserving static IP address...
	I0913 23:53:31.192892   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has current primary IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:31.193278   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find host DHCP lease matching {name: "ha-817269", mac: "52:54:00:ff:63:b0", ip: "192.168.39.132"} in network mk-ha-817269
	I0913 23:53:31.264589   25213 main.go:141] libmachine: (ha-817269) Reserved static IP address: 192.168.39.132
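The lines above show the driver polling libvirt for the guest's DHCP lease, sleeping a progressively longer, uneven interval between attempts ("will retry after 252ms … 4.5s") until the lease appears. The sketch below is a minimal stand-in for that pattern, not minikube's actual retry.go: probe is a hypothetical lookup function, and the delay is simply doubled with jitter until an overall deadline passes.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP calls probe() with a growing, jittered delay until it returns an
// IP or the overall timeout elapses. probe is a stand-in for asking libvirt
// for the domain's DHCP lease.
func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := probe()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add up to 50% jitter, roughly matching the uneven intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Hypothetical probe that "finds" the lease on the fourth call.
	calls := 0
	probe := func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.132", nil
	}
	ip, err := waitForIP(probe, 2*time.Minute)
	fmt.Println(ip, err)
}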
	I0913 23:53:31.264621   25213 main.go:141] libmachine: (ha-817269) DBG | Getting to WaitForSSH function...
	I0913 23:53:31.264628   25213 main.go:141] libmachine: (ha-817269) Waiting for SSH to be available...
	I0913 23:53:31.267119   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:31.267713   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269
	I0913 23:53:31.267739   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find defined IP address of network mk-ha-817269 interface with MAC address 52:54:00:ff:63:b0
	I0913 23:53:31.268014   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH client type: external
	I0913 23:53:31.268039   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa (-rw-------)
	I0913 23:53:31.268086   25213 main.go:141] libmachine: (ha-817269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:53:31.268107   25213 main.go:141] libmachine: (ha-817269) DBG | About to run SSH command:
	I0913 23:53:31.268117   25213 main.go:141] libmachine: (ha-817269) DBG | exit 0
	I0913 23:53:31.271648   25213 main.go:141] libmachine: (ha-817269) DBG | SSH cmd err, output: exit status 255: 
	I0913 23:53:31.271671   25213 main.go:141] libmachine: (ha-817269) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0913 23:53:31.271680   25213 main.go:141] libmachine: (ha-817269) DBG | command : exit 0
	I0913 23:53:31.271686   25213 main.go:141] libmachine: (ha-817269) DBG | err     : exit status 255
	I0913 23:53:31.271707   25213 main.go:141] libmachine: (ha-817269) DBG | output  : 
	I0913 23:53:34.273846   25213 main.go:141] libmachine: (ha-817269) DBG | Getting to WaitForSSH function...
	I0913 23:53:34.276075   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.276457   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.276487   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.276586   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH client type: external
	I0913 23:53:34.276614   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa (-rw-------)
	I0913 23:53:34.276662   25213 main.go:141] libmachine: (ha-817269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:53:34.276679   25213 main.go:141] libmachine: (ha-817269) DBG | About to run SSH command:
	I0913 23:53:34.276690   25213 main.go:141] libmachine: (ha-817269) DBG | exit 0
	I0913 23:53:34.403956   25213 main.go:141] libmachine: (ha-817269) DBG | SSH cmd err, output: <nil>: 
	I0913 23:53:34.404198   25213 main.go:141] libmachine: (ha-817269) KVM machine creation complete!
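The "Using SSH client type: external" lines show the driver shelling out to /usr/bin/ssh with host-key checking disabled and running `exit 0`; the first attempt fails with status 255 because sshd inside the guest is not up yet, and the retry a few seconds later succeeds. A rough local approximation of that probe, using os/exec rather than minikube's sshutil, might look like the following; the flag set is copied from the log, while the address, key path, and retry interval are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh docker@addr exit 0` succeeds, retrying every
// few seconds until the timeout, mirroring the WaitForSSH loop in the log.
func sshReady(addr, keyPath string, timeout time.Duration) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command("ssh", args...).Run()
		if err == nil {
			return nil // sshd answered and ran the command
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh never became available: %w", err)
		}
		time.Sleep(3 * time.Second) // roughly the gap between the two attempts above
	}
}

func main() {
	// Placeholder values; substitute the machine's IP and private key.
	fmt.Println(sshReady("192.168.39.132", "/path/to/id_rsa", 2*time.Minute))
}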
	I0913 23:53:34.404539   25213 main.go:141] libmachine: (ha-817269) Calling .GetConfigRaw
	I0913 23:53:34.405266   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:34.405464   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:34.405588   25213 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:53:34.405602   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:34.406773   25213 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:53:34.406791   25213 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:53:34.406807   25213 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:53:34.406818   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.408795   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.409115   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.409151   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.409322   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.409481   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.409603   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.409716   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.409897   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.410072   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.410087   25213 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:53:34.519177   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:53:34.519197   25213 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:53:34.519204   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.521830   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.522248   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.522280   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.522421   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.522611   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.522799   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.522891   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.523011   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.523208   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.523226   25213 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:53:34.632549   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:53:34.632637   25213 main.go:141] libmachine: found compatible host: buildroot
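"Detecting the provisioner" works by catting /etc/os-release over SSH and matching its fields against known provisioners, which is how the Buildroot guest is recognized here. A small, self-contained parser for that key=value format could look like this; the exact field minikube keys on may differ, so treat the ID check as an assumption.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the contents of /etc/os-release into a map,
// stripping optional quotes around values.
func parseOSRelease(contents string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[key] = strings.Trim(value, `"`)
	}
	return fields
}

func main() {
	// Sample taken from the SSH output in the log above.
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	osr := parseOSRelease(sample)
	if strings.EqualFold(osr["ID"], "buildroot") {
		fmt.Println("found compatible host:", osr["NAME"], osr["VERSION_ID"])
	}
}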
	I0913 23:53:34.632644   25213 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:53:34.632652   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:34.632870   25213 buildroot.go:166] provisioning hostname "ha-817269"
	I0913 23:53:34.632894   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:34.633088   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.635824   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.636183   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.636209   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.636399   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.636546   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.636680   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.636783   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.636900   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.637092   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.637105   25213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269 && echo "ha-817269" | sudo tee /etc/hostname
	I0913 23:53:34.758071   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269
	
	I0913 23:53:34.758099   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.761001   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.761542   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.761573   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.761733   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.761956   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.762123   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.762254   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.762386   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.762570   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.762586   25213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:53:34.882167   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:53:34.882200   25213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:53:34.882236   25213 buildroot.go:174] setting up certificates
	I0913 23:53:34.882252   25213 provision.go:84] configureAuth start
	I0913 23:53:34.882263   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:34.882558   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:34.885983   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.886447   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.886476   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.886647   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.889068   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.889616   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.889647   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.889744   25213 provision.go:143] copyHostCerts
	I0913 23:53:34.889790   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:53:34.889826   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0913 23:53:34.889833   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:53:34.889909   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:53:34.889993   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:53:34.890014   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0913 23:53:34.890021   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:53:34.890050   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:53:34.890089   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:53:34.890105   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0913 23:53:34.890111   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:53:34.890135   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:53:34.890178   25213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269 san=[127.0.0.1 192.168.39.132 ha-817269 localhost minikube]
	I0913 23:53:34.960122   25213 provision.go:177] copyRemoteCerts
	I0913 23:53:34.960189   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:53:34.960211   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.963549   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.964287   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.964320   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.964474   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.964703   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.964867   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.965025   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.050215   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 23:53:35.050281   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:53:35.076921   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 23:53:35.077000   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0913 23:53:35.102757   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 23:53:35.102863   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:53:35.127477   25213 provision.go:87] duration metric: took 245.211667ms to configureAuth
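configureAuth copies the host CA material into place and then generates a per-machine server certificate whose SANs cover the loopback address, the VM's IP, its hostname, and "localhost"/"minikube", as the "generating server cert" line shows. The sketch below produces a comparable certificate with Go's crypto/x509; for brevity it is self-signed rather than signed by minikube's CA, so read it only as an illustration of how the SAN list is attached.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-817269"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the ones printed in the log above.
		DNSNames:    []string{"ha-817269", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.132")},
	}
	// Self-signed for the sake of a short example; minikube signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}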
	I0913 23:53:35.127513   25213 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:53:35.127714   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:53:35.127813   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.130425   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.130728   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.130749   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.131038   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.131252   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.131422   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.131547   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.131689   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:35.131908   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:35.131926   25213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:53:35.358185   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:53:35.358236   25213 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:53:35.358245   25213 main.go:141] libmachine: (ha-817269) Calling .GetURL
	I0913 23:53:35.359953   25213 main.go:141] libmachine: (ha-817269) DBG | Using libvirt version 6000000
	I0913 23:53:35.362538   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.362813   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.362837   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.363009   25213 main.go:141] libmachine: Docker is up and running!
	I0913 23:53:35.363061   25213 main.go:141] libmachine: Reticulating splines...
	I0913 23:53:35.363074   25213 client.go:171] duration metric: took 24.276108937s to LocalClient.Create
	I0913 23:53:35.363096   25213 start.go:167] duration metric: took 24.276170063s to libmachine.API.Create "ha-817269"
	I0913 23:53:35.363107   25213 start.go:293] postStartSetup for "ha-817269" (driver="kvm2")
	I0913 23:53:35.363122   25213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:53:35.363145   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.363425   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:53:35.363461   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.366068   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.366439   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.366467   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.366579   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.366792   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.366925   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.367069   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.454158   25213 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:53:35.458934   25213 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:53:35.458963   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:53:35.459029   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:53:35.459121   25213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0913 23:53:35.459134   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0913 23:53:35.459254   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 23:53:35.469014   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:53:35.493957   25213 start.go:296] duration metric: took 130.832596ms for postStartSetup
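postStartSetup scans .minikube/addons and .minikube/files and mirrors anything it finds onto the guest at the same relative path, which is how files/etc/ssl/certs/126022.pem ends up at /etc/ssl/certs/126022.pem above. A minimal local walk that produces the same source-to-destination mapping (without the actual scp step, and with a hypothetical directory layout) could be:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listSyncTargets walks filesDir and, for every regular file, reports the
// destination path it would be copied to on the guest ("/" + relative path).
func listSyncTargets(filesDir string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(filesDir, path)
		if err != nil {
			return err
		}
		targets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return targets, err
}

func main() {
	// Hypothetical local layout; adjust to your MINIKUBE_HOME.
	targets, err := listSyncTargets("/home/jenkins/minikube-integration/19640-5422/.minikube/files")
	if err != nil {
		fmt.Println("walk error:", err)
		return
	}
	for src, dst := range targets {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}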
	I0913 23:53:35.494005   25213 main.go:141] libmachine: (ha-817269) Calling .GetConfigRaw
	I0913 23:53:35.494587   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:35.497099   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.497422   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.497460   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.497776   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:35.498033   25213 start.go:128] duration metric: took 24.430628809s to createHost
	I0913 23:53:35.498060   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.500703   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.501122   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.501174   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.501414   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.501616   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.501837   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.501983   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.502126   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:35.502312   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:35.502323   25213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:53:35.612315   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726271615.591239485
	
	I0913 23:53:35.612338   25213 fix.go:216] guest clock: 1726271615.591239485
	I0913 23:53:35.612345   25213 fix.go:229] Guest: 2024-09-13 23:53:35.591239485 +0000 UTC Remote: 2024-09-13 23:53:35.498047714 +0000 UTC m=+24.541264704 (delta=93.191771ms)
	I0913 23:53:35.612379   25213 fix.go:200] guest clock delta is within tolerance: 93.191771ms
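After provisioning, the host compares its own clock against `date +%s.%N` run on the guest and only resynchronizes if the delta exceeds a tolerance; here the 93ms difference is accepted. A small illustration of that comparison, using the values from the log and a hypothetical one-second tolerance:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the given host reference time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = time.Second // hypothetical threshold, not minikube's exact value
	host := time.Date(2024, 9, 13, 23, 53, 35, 498047714, time.UTC)
	delta, err := clockDelta("1726271615.591239485", host)
	if err != nil {
		panic(err)
	}
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, abs <= tolerance)
}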
	I0913 23:53:35.612393   25213 start.go:83] releasing machines lock for "ha-817269", held for 24.545066092s
	I0913 23:53:35.612414   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.612654   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:35.614972   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.615244   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.615274   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.615432   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.615990   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.616142   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.616256   25213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:53:35.616308   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.616349   25213 ssh_runner.go:195] Run: cat /version.json
	I0913 23:53:35.616368   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.618751   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.618958   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.619096   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.619121   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.619299   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.619370   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.619398   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.619603   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.619615   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.619809   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.619822   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.620039   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.620061   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.620201   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.732907   25213 ssh_runner.go:195] Run: systemctl --version
	I0913 23:53:35.738704   25213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:53:35.911858   25213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:53:35.917837   25213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:53:35.917904   25213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:53:35.933787   25213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:53:35.933817   25213 start.go:495] detecting cgroup driver to use...
	I0913 23:53:35.933876   25213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:53:35.948182   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:53:35.963525   25213 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:53:35.963578   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:53:35.976683   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:53:35.990088   25213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:53:36.107297   25213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:53:36.255450   25213 docker.go:233] disabling docker service ...
	I0913 23:53:36.255511   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:53:36.272033   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:53:36.285254   25213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:53:36.401144   25213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:53:36.513483   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:53:36.527278   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:53:36.545449   25213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:53:36.545504   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.556091   25213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:53:36.556150   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.566368   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.576307   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.586278   25213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:53:36.596436   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.606372   25213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.622740   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
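The series of `sudo sed -i` calls above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as its cgroup manager. Run locally rather than through the SSH runner, the same kind of edit can be expressed as a per-line regexp replacement over the file, roughly as sketched here (the file path is taken from the log; everything else is an approximation).

package main

import (
	"fmt"
	"os"
	"regexp"
)

// replaceLine rewrites every line of path matching pattern with repl,
// the moral equivalent of the `sed -i 's|pattern|repl|'` calls in the log.
func replaceLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re, err := regexp.Compile("(?m)" + pattern)
	if err != nil {
		return err
	}
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	edits := []struct{ pattern, repl string }{
		{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`},
		{`^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, e := range edits {
		if err := replaceLine(conf, e.pattern, e.repl); err != nil {
			fmt.Println("edit failed:", err)
		}
	}
}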
	I0913 23:53:36.632542   25213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:53:36.641527   25213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:53:36.641603   25213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:53:36.654880   25213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:53:36.663948   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:53:36.776355   25213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:53:36.864463   25213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:53:36.864547   25213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:53:36.868817   25213 start.go:563] Will wait 60s for crictl version
	I0913 23:53:36.868871   25213 ssh_runner.go:195] Run: which crictl
	I0913 23:53:36.872311   25213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:53:36.914551   25213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
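After `systemctl restart crio`, the start-up code budgets 60 seconds for /var/run/crio/crio.sock to appear before it calls crictl, as the "Will wait 60s for socket path" line records. A bare-bones version of that wait, polling os.Stat in a loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout expires,
// similar to the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %v", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}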
	I0913 23:53:36.914633   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:53:36.941104   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:53:36.971114   25213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:53:36.972363   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:36.974989   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:36.975289   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:36.975355   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:36.975572   25213 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:53:36.979264   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:53:36.991147   25213 kubeadm.go:883] updating cluster {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:53:36.991246   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:53:36.991285   25213 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:53:37.026797   25213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 23:53:37.026870   25213 ssh_runner.go:195] Run: which lz4
	I0913 23:53:37.030818   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0913 23:53:37.030925   25213 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 23:53:37.034775   25213 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 23:53:37.034802   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 23:53:38.266803   25213 crio.go:462] duration metric: took 1.235912846s to copy over tarball
	I0913 23:53:38.266884   25213 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 23:53:40.230553   25213 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.963636138s)
	I0913 23:53:40.230586   25213 crio.go:469] duration metric: took 1.963756576s to extract the tarball
	I0913 23:53:40.230593   25213 ssh_runner.go:146] rm: /preloaded.tar.lz4
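Because no preloaded images were found in the CRI-O image store, the preload tarball is copied to /preloaded.tar.lz4 (after a failed stat confirms it is not already there), unpacked into /var with tar's lz4 filter, and then removed. Run locally instead of over SSH, that sequence is roughly the following; the source path and the plain `cp` stand in for minikube's scp step.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball into dest, skipping
// the copy step if the tarball is already present at tarPath.
func extractPreload(srcTar, tarPath, dest string) error {
	if _, err := os.Stat(tarPath); err != nil {
		// Tarball not there yet; in minikube this is an scp over SSH.
		if out, err := exec.Command("cp", srcTar, tarPath).CombinedOutput(); err != nil {
			return fmt.Errorf("copy failed: %v: %s", err, out)
		}
	}
	// Same flags as the logged command: preserve xattrs, decompress with lz4.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarPath)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return os.Remove(tarPath)
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4",
		"/var",
	)
	fmt.Println(err)
}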
	I0913 23:53:40.265815   25213 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:53:40.306468   25213 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 23:53:40.306488   25213 cache_images.go:84] Images are preloaded, skipping loading
	I0913 23:53:40.306495   25213 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.31.1 crio true true} ...
	I0913 23:53:40.306599   25213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:53:40.306662   25213 ssh_runner.go:195] Run: crio config
	I0913 23:53:40.351105   25213 cni.go:84] Creating CNI manager for ""
	I0913 23:53:40.351125   25213 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 23:53:40.351134   25213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:53:40.351153   25213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-817269 NodeName:ha-817269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:53:40.351279   25213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-817269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:53:40.351300   25213 kube-vip.go:115] generating kube-vip config ...
	I0913 23:53:40.351344   25213 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 23:53:40.366350   25213 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 23:53:40.366447   25213 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0913 23:53:40.366496   25213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:53:40.375568   25213 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:53:40.375631   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 23:53:40.384270   25213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 23:53:40.399072   25213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:53:40.414108   25213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 23:53:40.428894   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0913 23:53:40.444102   25213 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 23:53:40.447494   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:53:40.459630   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:53:40.592810   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:53:40.608621   25213 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.132
	I0913 23:53:40.608648   25213 certs.go:194] generating shared ca certs ...
	I0913 23:53:40.608664   25213 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:40.608849   25213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:53:40.608898   25213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:53:40.608914   25213 certs.go:256] generating profile certs ...
	I0913 23:53:40.608974   25213 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0913 23:53:40.608993   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt with IP's: []
	I0913 23:53:41.075182   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt ...
	I0913 23:53:41.075218   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt: {Name:mk37663b0bb79f3cd029e72ea8174a7a1a581895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.075407   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key ...
	I0913 23:53:41.075421   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key: {Name:mk3478584ca6bdcaa18e4b2b10357b0ee027b48f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.075503   25213 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b
	I0913 23:53:41.075523   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.254]
	I0913 23:53:41.146490   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b ...
	I0913 23:53:41.146522   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b: {Name:mkfe3a73348ddd87edbc5a6cabc554c4610640b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.146692   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b ...
	I0913 23:53:41.146706   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b: {Name:mk1cbe766e5f2a877a631cfb2d64d99e621e4f87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.146783   25213 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0913 23:53:41.146895   25213 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0913 23:53:41.146959   25213 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
	I0913 23:53:41.146977   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt with IP's: []
	I0913 23:53:41.216992   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt ...
	I0913 23:53:41.217023   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt: {Name:mk0de44fc0ae0c22325d0da288904b6579d9cf32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.217185   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key ...
	I0913 23:53:41.217197   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key: {Name:mkc555d16147bb2f803744ff0236a4697e3c2ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
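The "generating signed profile cert" blocks above all follow the same pattern: create a key pair and sign a certificate with the shared minikube CA set up earlier. Below is a reduced Go sketch of that pattern for the "minikube-user" client certificate; the subject fields, validity window, and file names are illustrative assumptions, not values read from minikube's certs.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

// mustPEM reads a file and returns the DER bytes of its first PEM block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	// Shared CA produced in the "generating shared ca certs" step above.
	caCert, err := x509.ParseCertificate(mustPEM("ca.crt"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key"))
	if err != nil {
		log.Fatal(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Client certificate template; subject and validity are illustrative only.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Print the signed client cert; the matching key would be written alongside it.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}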
	I0913 23:53:41.217278   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 23:53:41.217298   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 23:53:41.217314   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:53:41.217331   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:53:41.217346   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 23:53:41.217361   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 23:53:41.217379   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 23:53:41.217393   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 23:53:41.217446   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0913 23:53:41.217487   25213 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0913 23:53:41.217504   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:53:41.217533   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:53:41.217566   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:53:41.217591   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:53:41.217636   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:53:41.217677   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.217694   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.217708   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.218292   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:53:41.242148   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:53:41.263906   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:53:41.285294   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:53:41.307736   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 23:53:41.329593   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 23:53:41.354179   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:53:41.379968   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:53:41.406492   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0913 23:53:41.428642   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0913 23:53:41.450730   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:53:41.474234   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:53:41.489695   25213 ssh_runner.go:195] Run: openssl version
	I0913 23:53:41.495118   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:53:41.505319   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.509425   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.509471   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.514802   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:53:41.524637   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0913 23:53:41.534301   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.538253   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.538295   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.543478   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0913 23:53:41.553327   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0913 23:53:41.562956   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.567018   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.567074   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.572317   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
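Each certificate installed above goes through the same three steps: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and create the /etc/ssl/certs/<hash>.0 symlink so OpenSSL-based clients find it. A condensed local sketch of those steps follows; it runs the commands through os/exec instead of minikube's ssh_runner and is not minikube's actual code.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the logged sequence: copy the PEM under
// /usr/share/ca-certificates, ask openssl for its subject hash, and link
// /etc/ssl/certs/<hash>.0 at it so the system trust store picks it up.
func installCA(pemPath string) error {
	dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(pemPath))
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}
	// Same command the log runs: openssl x509 -hash -noout -in <cert>.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
	return os.Symlink(dst, link)
}

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s cert.pem", os.Args[0])
	}
	if err := installCA(os.Args[1]); err != nil {
		log.Fatal(err)
	}
	fmt.Println("installed", os.Args[1])
}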
	I0913 23:53:41.582216   25213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:53:41.585940   25213 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:53:41.585997   25213 kubeadm.go:392] StartCluster: {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:53:41.586064   25213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 23:53:41.586124   25213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 23:53:41.621256   25213 cri.go:89] found id: ""
	I0913 23:53:41.621326   25213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:53:41.630554   25213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:53:41.639193   25213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:53:41.647749   25213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:53:41.647766   25213 kubeadm.go:157] found existing configuration files:
	
	I0913 23:53:41.647812   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:53:41.656046   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:53:41.656099   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:53:41.664760   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:53:41.673925   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:53:41.673986   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:53:41.683704   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:53:41.693080   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:53:41.693153   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:53:41.702617   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:53:41.711400   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:53:41.711451   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:53:41.721031   25213 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 23:53:41.828442   25213 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:53:41.828728   25213 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:53:41.938049   25213 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:53:41.938168   25213 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:53:41.938296   25213 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:53:41.947216   25213 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:53:41.989200   25213 out.go:235]   - Generating certificates and keys ...
	I0913 23:53:41.989307   25213 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:53:41.989402   25213 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:53:42.288660   25213 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:53:42.544220   25213 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:53:42.813284   25213 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:53:42.949393   25213 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:53:43.132818   25213 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:53:43.133008   25213 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-817269 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
	I0913 23:53:43.259724   25213 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:53:43.259961   25213 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-817269 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
	I0913 23:53:43.610264   25213 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:53:43.726166   25213 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:53:43.940296   25213 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:53:43.940368   25213 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:53:44.076855   25213 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:53:44.294961   25213 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:53:44.360663   25213 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:53:44.488776   25213 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:53:44.595267   25213 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:53:44.595948   25213 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:53:44.599411   25213 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:53:44.640864   25213 out.go:235]   - Booting up control plane ...
	I0913 23:53:44.641031   25213 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:53:44.641134   25213 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:53:44.641222   25213 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:53:44.641384   25213 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:53:44.641507   25213 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:53:44.641592   25213 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:53:44.757032   25213 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:53:44.757204   25213 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:53:45.759248   25213 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002499871s
	I0913 23:53:45.759374   25213 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:53:51.432813   25213 kubeadm.go:310] [api-check] The API server is healthy after 5.676702105s
	I0913 23:53:51.444631   25213 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:53:51.464639   25213 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:53:51.991895   25213 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:53:51.992115   25213 kubeadm.go:310] [mark-control-plane] Marking the node ha-817269 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:53:52.005034   25213 kubeadm.go:310] [bootstrap-token] Using token: cl4itr.u5psq9zksjfm5ip6
	I0913 23:53:52.006623   25213 out.go:235]   - Configuring RBAC rules ...
	I0913 23:53:52.006754   25213 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:53:52.017234   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:53:52.025927   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:53:52.029244   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:53:52.037816   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:53:52.041421   25213 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:53:52.057813   25213 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:53:52.295747   25213 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:53:52.839601   25213 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:53:52.840653   25213 kubeadm.go:310] 
	I0913 23:53:52.840747   25213 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:53:52.840758   25213 kubeadm.go:310] 
	I0913 23:53:52.840869   25213 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:53:52.840883   25213 kubeadm.go:310] 
	I0913 23:53:52.840919   25213 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:53:52.841006   25213 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:53:52.841061   25213 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:53:52.841069   25213 kubeadm.go:310] 
	I0913 23:53:52.841115   25213 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:53:52.841123   25213 kubeadm.go:310] 
	I0913 23:53:52.841160   25213 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:53:52.841167   25213 kubeadm.go:310] 
	I0913 23:53:52.841213   25213 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:53:52.841290   25213 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:53:52.841354   25213 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:53:52.841361   25213 kubeadm.go:310] 
	I0913 23:53:52.841430   25213 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:53:52.841502   25213 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:53:52.841509   25213 kubeadm.go:310] 
	I0913 23:53:52.841598   25213 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cl4itr.u5psq9zksjfm5ip6 \
	I0913 23:53:52.841747   25213 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0913 23:53:52.841785   25213 kubeadm.go:310] 	--control-plane 
	I0913 23:53:52.841791   25213 kubeadm.go:310] 
	I0913 23:53:52.841935   25213 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:53:52.841949   25213 kubeadm.go:310] 
	I0913 23:53:52.842025   25213 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cl4itr.u5psq9zksjfm5ip6 \
	I0913 23:53:52.842167   25213 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0913 23:53:52.843211   25213 kubeadm.go:310] W0913 23:53:41.808739     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:53:52.843530   25213 kubeadm.go:310] W0913 23:53:41.810468     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:53:52.843689   25213 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
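The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. The short sketch below recomputes that value for comparison; the /etc/kubernetes/pki/ca.crt path is kubeadm's default location and an assumption here, not something taken from this log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// kubeadm's default CA location; adjust if the cluster stores its PKI elsewhere.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The hash kubeadm prints is sha256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}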
	I0913 23:53:52.843710   25213 cni.go:84] Creating CNI manager for ""
	I0913 23:53:52.843717   25213 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 23:53:52.845718   25213 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0913 23:53:52.847496   25213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0913 23:53:52.852667   25213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0913 23:53:52.852690   25213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0913 23:53:52.872920   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0913 23:53:53.256393   25213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:53:53.256456   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:53.256509   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-817269 minikube.k8s.io/updated_at=2024_09_13T23_53_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=ha-817269 minikube.k8s.io/primary=true
	I0913 23:53:53.410594   25213 ops.go:34] apiserver oom_adj: -16
	I0913 23:53:53.410722   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:53.910931   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:54.411234   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:54.910919   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:55.410801   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:55.911353   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:56.410987   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:56.504995   25213 kubeadm.go:1113] duration metric: took 3.248599134s to wait for elevateKubeSystemPrivileges
	I0913 23:53:56.505051   25213 kubeadm.go:394] duration metric: took 14.919056274s to StartCluster
	I0913 23:53:56.505070   25213 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:56.505153   25213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:53:56.505950   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:56.506209   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:53:56.506204   25213 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:53:56.506233   25213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 23:53:56.506302   25213 start.go:241] waiting for startup goroutines ...
	I0913 23:53:56.506314   25213 addons.go:69] Setting storage-provisioner=true in profile "ha-817269"
	I0913 23:53:56.506316   25213 addons.go:69] Setting default-storageclass=true in profile "ha-817269"
	I0913 23:53:56.506330   25213 addons.go:234] Setting addon storage-provisioner=true in "ha-817269"
	I0913 23:53:56.506331   25213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-817269"
	I0913 23:53:56.506356   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:53:56.506436   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:53:56.506766   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.506779   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.506811   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.506811   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.521882   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0913 23:53:56.521996   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0913 23:53:56.522334   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.522410   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.522846   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.522873   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.522872   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.522927   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.523287   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.523330   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.523511   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:56.523844   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.523886   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.525584   25213 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:53:56.525917   25213 kapi.go:59] client config for ha-817269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 23:53:56.526462   25213 cert_rotation.go:140] Starting client certificate rotation controller
	I0913 23:53:56.526745   25213 addons.go:234] Setting addon default-storageclass=true in "ha-817269"
	I0913 23:53:56.526790   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:53:56.527168   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.527216   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.539201   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0913 23:53:56.539644   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.540159   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.540183   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.540568   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.540791   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:56.541966   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0913 23:53:56.542386   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.542507   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:56.542857   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.542879   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.543221   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.543822   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.543869   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.544554   25213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:53:56.545739   25213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:53:56.545769   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:53:56.545792   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:56.549150   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.549676   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:56.549698   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.549859   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:56.550033   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:56.550171   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:56.550307   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:56.558876   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45083
	I0913 23:53:56.559374   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.559864   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.559884   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.560214   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.560418   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:56.562125   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:56.562374   25213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:53:56.562393   25213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:53:56.562410   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:56.565212   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.565614   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:56.565643   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.565871   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:56.566052   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:56.566201   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:56.566353   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:56.704825   25213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:53:56.707339   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 23:53:56.717097   25213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:53:57.609799   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.609820   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.609880   25213 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 23:53:57.609900   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.609909   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.610103   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610121   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610131   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.610138   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.610212   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610226   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610239   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.610246   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.610406   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610425   25213 main.go:141] libmachine: (ha-817269) DBG | Closing plugin on server side
	I0913 23:53:57.610426   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610524   25213 main.go:141] libmachine: (ha-817269) DBG | Closing plugin on server side
	I0913 23:53:57.610558   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610580   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610667   25213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 23:53:57.610686   25213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 23:53:57.610764   25213 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0913 23:53:57.610769   25213 round_trippers.go:469] Request Headers:
	I0913 23:53:57.610777   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:53:57.610781   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:53:57.629053   25213 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0913 23:53:57.629634   25213 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0913 23:53:57.629652   25213 round_trippers.go:469] Request Headers:
	I0913 23:53:57.629662   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:53:57.629668   25213 round_trippers.go:473]     Content-Type: application/json
	I0913 23:53:57.629671   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:53:57.633924   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
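The GET/PUT pair above is the default-storageclass addon touching the "standard" StorageClass. Below is a hedged client-go sketch of an equivalent update that annotates that class as the cluster default; it is a standalone illustration rather than minikube's addon code, and it reads the kubeconfig path from $KUBECONFIG.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig minikube wrote for this profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Marking a StorageClass as default is done with this well-known annotation.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("standard is now the default StorageClass")
}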
	I0913 23:53:57.634075   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.634086   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.634387   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.634405   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.637066   25213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0913 23:53:57.638474   25213 addons.go:510] duration metric: took 1.132245825s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0913 23:53:57.638515   25213 start.go:246] waiting for cluster config update ...
	I0913 23:53:57.638531   25213 start.go:255] writing updated cluster config ...
	I0913 23:53:57.640741   25213 out.go:201] 
	I0913 23:53:57.642452   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:53:57.642531   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:57.644339   25213 out.go:177] * Starting "ha-817269-m02" control-plane node in "ha-817269" cluster
	I0913 23:53:57.645864   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:53:57.645892   25213 cache.go:56] Caching tarball of preloaded images
	I0913 23:53:57.645992   25213 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:53:57.646004   25213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:53:57.646069   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:57.646234   25213 start.go:360] acquireMachinesLock for ha-817269-m02: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:53:57.646282   25213 start.go:364] duration metric: took 25.679µs to acquireMachinesLock for "ha-817269-m02"
	I0913 23:53:57.646299   25213 start.go:93] Provisioning new machine with config: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:53:57.646359   25213 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0913 23:53:57.647723   25213 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 23:53:57.647822   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:57.647859   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:57.662310   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0913 23:53:57.662772   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:57.663373   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:57.663401   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:57.663696   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:57.663905   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:53:57.664087   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:53:57.664297   25213 start.go:159] libmachine.API.Create for "ha-817269" (driver="kvm2")
	I0913 23:53:57.664370   25213 client.go:168] LocalClient.Create starting
	I0913 23:53:57.664407   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:53:57.664452   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:57.664471   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:57.664626   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:53:57.664677   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:57.664695   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:57.664793   25213 main.go:141] libmachine: Running pre-create checks...
	I0913 23:53:57.664820   25213 main.go:141] libmachine: (ha-817269-m02) Calling .PreCreateCheck
	I0913 23:53:57.665030   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetConfigRaw
	I0913 23:53:57.665580   25213 main.go:141] libmachine: Creating machine...
	I0913 23:53:57.665599   25213 main.go:141] libmachine: (ha-817269-m02) Calling .Create
	I0913 23:53:57.665753   25213 main.go:141] libmachine: (ha-817269-m02) Creating KVM machine...
	I0913 23:53:57.667798   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found existing default KVM network
	I0913 23:53:57.668051   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found existing private KVM network mk-ha-817269
	I0913 23:53:57.668203   25213 main.go:141] libmachine: (ha-817269-m02) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02 ...
	I0913 23:53:57.668228   25213 main.go:141] libmachine: (ha-817269-m02) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:53:57.668330   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:57.668206   25578 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:57.668424   25213 main.go:141] libmachine: (ha-817269-m02) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:53:57.910101   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:57.909941   25578 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa...
	I0913 23:53:58.012058   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:58.011951   25578 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/ha-817269-m02.rawdisk...
	I0913 23:53:58.012088   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Writing magic tar header
	I0913 23:53:58.012097   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Writing SSH key tar header
	I0913 23:53:58.012106   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:58.012056   25578 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02 ...
	I0913 23:53:58.012183   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02
	I0913 23:53:58.012209   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:53:58.012222   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02 (perms=drwx------)
	I0913 23:53:58.012231   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:53:58.012240   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:53:58.012249   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:53:58.012255   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:53:58.012267   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:53:58.012276   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:58.012292   25213 main.go:141] libmachine: (ha-817269-m02) Creating domain...
	I0913 23:53:58.012301   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:53:58.012315   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:53:58.012323   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:53:58.012337   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home
	I0913 23:53:58.012345   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Skipping /home - not owner
	I0913 23:53:58.013245   25213 main.go:141] libmachine: (ha-817269-m02) define libvirt domain using xml: 
	I0913 23:53:58.013264   25213 main.go:141] libmachine: (ha-817269-m02) <domain type='kvm'>
	I0913 23:53:58.013280   25213 main.go:141] libmachine: (ha-817269-m02)   <name>ha-817269-m02</name>
	I0913 23:53:58.013287   25213 main.go:141] libmachine: (ha-817269-m02)   <memory unit='MiB'>2200</memory>
	I0913 23:53:58.013299   25213 main.go:141] libmachine: (ha-817269-m02)   <vcpu>2</vcpu>
	I0913 23:53:58.013306   25213 main.go:141] libmachine: (ha-817269-m02)   <features>
	I0913 23:53:58.013317   25213 main.go:141] libmachine: (ha-817269-m02)     <acpi/>
	I0913 23:53:58.013323   25213 main.go:141] libmachine: (ha-817269-m02)     <apic/>
	I0913 23:53:58.013333   25213 main.go:141] libmachine: (ha-817269-m02)     <pae/>
	I0913 23:53:58.013341   25213 main.go:141] libmachine: (ha-817269-m02)     
	I0913 23:53:58.013352   25213 main.go:141] libmachine: (ha-817269-m02)   </features>
	I0913 23:53:58.013362   25213 main.go:141] libmachine: (ha-817269-m02)   <cpu mode='host-passthrough'>
	I0913 23:53:58.013372   25213 main.go:141] libmachine: (ha-817269-m02)   
	I0913 23:53:58.013379   25213 main.go:141] libmachine: (ha-817269-m02)   </cpu>
	I0913 23:53:58.013386   25213 main.go:141] libmachine: (ha-817269-m02)   <os>
	I0913 23:53:58.013396   25213 main.go:141] libmachine: (ha-817269-m02)     <type>hvm</type>
	I0913 23:53:58.013404   25213 main.go:141] libmachine: (ha-817269-m02)     <boot dev='cdrom'/>
	I0913 23:53:58.013414   25213 main.go:141] libmachine: (ha-817269-m02)     <boot dev='hd'/>
	I0913 23:53:58.013422   25213 main.go:141] libmachine: (ha-817269-m02)     <bootmenu enable='no'/>
	I0913 23:53:58.013430   25213 main.go:141] libmachine: (ha-817269-m02)   </os>
	I0913 23:53:58.013437   25213 main.go:141] libmachine: (ha-817269-m02)   <devices>
	I0913 23:53:58.013447   25213 main.go:141] libmachine: (ha-817269-m02)     <disk type='file' device='cdrom'>
	I0913 23:53:58.013462   25213 main.go:141] libmachine: (ha-817269-m02)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/boot2docker.iso'/>
	I0913 23:53:58.013472   25213 main.go:141] libmachine: (ha-817269-m02)       <target dev='hdc' bus='scsi'/>
	I0913 23:53:58.013482   25213 main.go:141] libmachine: (ha-817269-m02)       <readonly/>
	I0913 23:53:58.013490   25213 main.go:141] libmachine: (ha-817269-m02)     </disk>
	I0913 23:53:58.013501   25213 main.go:141] libmachine: (ha-817269-m02)     <disk type='file' device='disk'>
	I0913 23:53:58.013514   25213 main.go:141] libmachine: (ha-817269-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:53:58.013526   25213 main.go:141] libmachine: (ha-817269-m02)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/ha-817269-m02.rawdisk'/>
	I0913 23:53:58.013537   25213 main.go:141] libmachine: (ha-817269-m02)       <target dev='hda' bus='virtio'/>
	I0913 23:53:58.013550   25213 main.go:141] libmachine: (ha-817269-m02)     </disk>
	I0913 23:53:58.013560   25213 main.go:141] libmachine: (ha-817269-m02)     <interface type='network'>
	I0913 23:53:58.013568   25213 main.go:141] libmachine: (ha-817269-m02)       <source network='mk-ha-817269'/>
	I0913 23:53:58.013575   25213 main.go:141] libmachine: (ha-817269-m02)       <model type='virtio'/>
	I0913 23:53:58.013584   25213 main.go:141] libmachine: (ha-817269-m02)     </interface>
	I0913 23:53:58.013591   25213 main.go:141] libmachine: (ha-817269-m02)     <interface type='network'>
	I0913 23:53:58.013602   25213 main.go:141] libmachine: (ha-817269-m02)       <source network='default'/>
	I0913 23:53:58.013612   25213 main.go:141] libmachine: (ha-817269-m02)       <model type='virtio'/>
	I0913 23:53:58.013619   25213 main.go:141] libmachine: (ha-817269-m02)     </interface>
	I0913 23:53:58.013629   25213 main.go:141] libmachine: (ha-817269-m02)     <serial type='pty'>
	I0913 23:53:58.013637   25213 main.go:141] libmachine: (ha-817269-m02)       <target port='0'/>
	I0913 23:53:58.013646   25213 main.go:141] libmachine: (ha-817269-m02)     </serial>
	I0913 23:53:58.013654   25213 main.go:141] libmachine: (ha-817269-m02)     <console type='pty'>
	I0913 23:53:58.013664   25213 main.go:141] libmachine: (ha-817269-m02)       <target type='serial' port='0'/>
	I0913 23:53:58.013674   25213 main.go:141] libmachine: (ha-817269-m02)     </console>
	I0913 23:53:58.013683   25213 main.go:141] libmachine: (ha-817269-m02)     <rng model='virtio'>
	I0913 23:53:58.013692   25213 main.go:141] libmachine: (ha-817269-m02)       <backend model='random'>/dev/random</backend>
	I0913 23:53:58.013701   25213 main.go:141] libmachine: (ha-817269-m02)     </rng>
	I0913 23:53:58.013708   25213 main.go:141] libmachine: (ha-817269-m02)     
	I0913 23:53:58.013717   25213 main.go:141] libmachine: (ha-817269-m02)     
	I0913 23:53:58.013724   25213 main.go:141] libmachine: (ha-817269-m02)   </devices>
	I0913 23:53:58.013733   25213 main.go:141] libmachine: (ha-817269-m02) </domain>
	I0913 23:53:58.013745   25213 main.go:141] libmachine: (ha-817269-m02) 
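The domain XML logged above can be reproduced with a small standalone program. The sketch below is illustrative only, not minikube's implementation (which drives libvirt through its Go bindings rather than the virsh CLI); the file paths and the trimmed-down template are assumptions.

// Illustrative sketch: render a libvirt domain XML similar to the one logged
// above and define it with the virsh CLI. Paths and template are assumptions.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
	"text/template"
)

type domainConfig struct {
	Name, DiskPath, ISOPath, Network string
	MemoryMB, VCPUs                  int
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	cfg := domainConfig{
		Name: "ha-817269-m02", MemoryMB: 2200, VCPUs: 2,
		DiskPath: "/path/to/ha-817269-m02.rawdisk",
		ISOPath:  "/path/to/boot2docker.iso",
		Network:  "mk-ha-817269",
	}
	var xml bytes.Buffer
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(&xml, cfg); err != nil {
		log.Fatal(err)
	}
	// virsh define reads the definition from a file, so stage the XML in a temp file.
	tmp, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(xml.Bytes()); err != nil {
		log.Fatal(err)
	}
	tmp.Close()
	out, err := exec.Command("virsh", "define", tmp.Name()).CombinedOutput()
	log.Printf("virsh define: %s", out)
	if err != nil {
		log.Fatal(err)
	}
}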
	I0913 23:53:58.020466   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:0a:ce:4e in network default
	I0913 23:53:58.021021   25213 main.go:141] libmachine: (ha-817269-m02) Ensuring networks are active...
	I0913 23:53:58.021046   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:58.021779   25213 main.go:141] libmachine: (ha-817269-m02) Ensuring network default is active
	I0913 23:53:58.022070   25213 main.go:141] libmachine: (ha-817269-m02) Ensuring network mk-ha-817269 is active
	I0913 23:53:58.022524   25213 main.go:141] libmachine: (ha-817269-m02) Getting domain xml...
	I0913 23:53:58.023156   25213 main.go:141] libmachine: (ha-817269-m02) Creating domain...
	I0913 23:53:59.258990   25213 main.go:141] libmachine: (ha-817269-m02) Waiting to get IP...
	I0913 23:53:59.259884   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:59.260305   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:53:59.260339   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:59.260293   25578 retry.go:31] will retry after 252.903714ms: waiting for machine to come up
	I0913 23:53:59.514798   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:59.515250   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:53:59.515284   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:59.515196   25578 retry.go:31] will retry after 243.975614ms: waiting for machine to come up
	I0913 23:53:59.760450   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:59.760896   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:53:59.760920   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:59.760863   25578 retry.go:31] will retry after 446.918322ms: waiting for machine to come up
	I0913 23:54:00.209499   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:00.209959   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:00.209984   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:00.209913   25578 retry.go:31] will retry after 371.644867ms: waiting for machine to come up
	I0913 23:54:00.583498   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:00.584074   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:00.584102   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:00.584022   25578 retry.go:31] will retry after 602.57541ms: waiting for machine to come up
	I0913 23:54:01.187665   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:01.188097   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:01.188134   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:01.188003   25578 retry.go:31] will retry after 636.328676ms: waiting for machine to come up
	I0913 23:54:01.825787   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:01.826208   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:01.826235   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:01.826162   25578 retry.go:31] will retry after 935.123574ms: waiting for machine to come up
	I0913 23:54:02.763341   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:02.763849   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:02.763876   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:02.763807   25578 retry.go:31] will retry after 1.434666123s: waiting for machine to come up
	I0913 23:54:04.200402   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:04.200901   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:04.200933   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:04.200804   25578 retry.go:31] will retry after 1.248828258s: waiting for machine to come up
	I0913 23:54:05.451314   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:05.451700   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:05.451730   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:05.451613   25578 retry.go:31] will retry after 1.935798889s: waiting for machine to come up
	I0913 23:54:07.389918   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:07.390398   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:07.390427   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:07.390347   25578 retry.go:31] will retry after 2.345270301s: waiting for machine to come up
	I0913 23:54:09.737093   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:09.737524   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:09.737545   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:09.737480   25578 retry.go:31] will retry after 2.860762897s: waiting for machine to come up
	I0913 23:54:12.601730   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:12.602285   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:12.602311   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:12.602216   25578 retry.go:31] will retry after 4.41059942s: waiting for machine to come up
	I0913 23:54:17.017065   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:17.017467   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:17.017488   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:17.017432   25578 retry.go:31] will retry after 4.935665555s: waiting for machine to come up
	I0913 23:54:21.956937   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:21.957630   25213 main.go:141] libmachine: (ha-817269-m02) Found IP for machine: 192.168.39.6
	I0913 23:54:21.957662   25213 main.go:141] libmachine: (ha-817269-m02) Reserving static IP address...
	I0913 23:54:21.957676   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has current primary IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:21.958107   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find host DHCP lease matching {name: "ha-817269-m02", mac: "52:54:00:12:e8:40", ip: "192.168.39.6"} in network mk-ha-817269
	I0913 23:54:22.033248   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Getting to WaitForSSH function...
	I0913 23:54:22.033276   25213 main.go:141] libmachine: (ha-817269-m02) Reserved static IP address: 192.168.39.6
	I0913 23:54:22.033299   25213 main.go:141] libmachine: (ha-817269-m02) Waiting for SSH to be available...
	I0913 23:54:22.035657   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.036155   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.036187   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.036318   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Using SSH client type: external
	I0913 23:54:22.036338   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa (-rw-------)
	I0913 23:54:22.036369   25213 main.go:141] libmachine: (ha-817269-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:54:22.036380   25213 main.go:141] libmachine: (ha-817269-m02) DBG | About to run SSH command:
	I0913 23:54:22.036394   25213 main.go:141] libmachine: (ha-817269-m02) DBG | exit 0
	I0913 23:54:22.163961   25213 main.go:141] libmachine: (ha-817269-m02) DBG | SSH cmd err, output: <nil>: 
	I0913 23:54:22.164275   25213 main.go:141] libmachine: (ha-817269-m02) KVM machine creation complete!
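The "will retry after ..." lines and the SSH wait above follow a standard poll-with-backoff pattern. A minimal standalone sketch, assuming the virsh CLI is available and that probing TCP port 22 is an acceptable stand-in for the real SSH check:

// Rough sketch (not minikube's code) of "wait for IP, then wait for SSH":
// poll virsh domifaddr with growing delays, then confirm port 22 answers.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

func waitForIP(domain string, attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("virsh", "domifaddr", domain).Output()
		if err == nil {
			for _, f := range strings.Fields(string(out)) {
				// Look for a CIDR-style IPv4 address such as 192.168.39.6/24.
				if strings.Contains(f, "/") && strings.Count(f, ".") == 3 {
					return strings.Split(f, "/")[0], nil
				}
			}
		}
		time.Sleep(delay)
		delay *= 2 // back off, mirroring the "will retry after ..." lines above
	}
	return "", fmt.Errorf("no IP for %s after %d attempts", domain, attempts)
}

func waitForSSH(ip string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", ip, deadline)
}

func main() {
	ip, err := waitForIP("ha-817269-m02", 15)
	if err != nil {
		panic(err)
	}
	if err := waitForSSH(ip, time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("machine is up at", ip)
}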
	I0913 23:54:22.164640   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetConfigRaw
	I0913 23:54:22.165156   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:22.165321   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:22.165452   25213 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:54:22.165463   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0913 23:54:22.166719   25213 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:54:22.166735   25213 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:54:22.166744   25213 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:54:22.166752   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.169014   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.169359   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.169401   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.169718   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.169900   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.170017   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.170113   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.170283   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.170503   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.170531   25213 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:54:22.291327   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:54:22.291350   25213 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:54:22.291357   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.294488   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.294844   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.294872   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.295093   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.295321   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.295494   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.295631   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.295818   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.295995   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.296006   25213 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:54:22.408849   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:54:22.408931   25213 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:54:22.408945   25213 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:54:22.408958   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:54:22.409228   25213 buildroot.go:166] provisioning hostname "ha-817269-m02"
	I0913 23:54:22.409257   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:54:22.409446   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.412134   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.412515   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.412543   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.412679   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.412850   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.413006   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.413149   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.413320   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.413505   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.413516   25213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269-m02 && echo "ha-817269-m02" | sudo tee /etc/hostname
	I0913 23:54:22.537581   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269-m02
	
	I0913 23:54:22.537610   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.540656   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.541295   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.541379   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.541682   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.541925   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.542136   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.542322   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.542488   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.542692   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.542711   25213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:54:22.664074   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
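The hostname provisioning above runs a small shell snippet on the guest over SSH. A rough equivalent using the ssh binary from Go is shown below; the key path, the docker user, and the condensed script are assumptions, and the real code goes through minikube's own SSH runner rather than the ssh binary.

// Illustrative only: provision a hostname over SSH with a shell snippet
// similar to the one logged above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func runSSH(ip, keyPath, script string) ([]byte, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+ip,
		script)
	return cmd.CombinedOutput()
}

func main() {
	host := "ha-817269-m02"
	script := fmt.Sprintf(
		"sudo hostname %[1]s && echo %[1]s | sudo tee /etc/hostname && "+
			"grep -q %[1]s /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts",
		host)
	out, err := runSSH("192.168.39.6", "/path/to/id_rsa", script)
	log.Printf("output: %s", out)
	if err != nil {
		log.Fatal(err)
	}
}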
	I0913 23:54:22.664106   25213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:54:22.664122   25213 buildroot.go:174] setting up certificates
	I0913 23:54:22.664132   25213 provision.go:84] configureAuth start
	I0913 23:54:22.664140   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:54:22.664402   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:22.667256   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.667697   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.667728   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.667924   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.670069   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.670397   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.670423   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.670585   25213 provision.go:143] copyHostCerts
	I0913 23:54:22.670622   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:54:22.670670   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0913 23:54:22.670683   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:54:22.670858   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:54:22.670970   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:54:22.670996   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0913 23:54:22.671006   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:54:22.671048   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:54:22.671108   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:54:22.671132   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0913 23:54:22.671141   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:54:22.671171   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:54:22.671233   25213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269-m02 san=[127.0.0.1 192.168.39.6 ha-817269-m02 localhost minikube]
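The server certificate above is generated with the listed SANs. A self-contained sketch using the standard crypto/x509 package follows; it is self-signed for brevity, whereas the real flow signs with the minikube CA key (ca-key.pem).

// Minimal sketch: generate a server certificate carrying the SANs from the
// log line above. Self-signed here; the actual code signs with the CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-817269-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-817269-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}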
	I0913 23:54:22.772722   25213 provision.go:177] copyRemoteCerts
	I0913 23:54:22.772797   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:54:22.772827   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.775563   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.775934   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.775959   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.776109   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.776280   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.776427   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.776581   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:22.862036   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 23:54:22.862119   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 23:54:22.886278   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 23:54:22.886364   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:54:22.910017   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 23:54:22.910086   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:54:22.933497   25213 provision.go:87] duration metric: took 269.353109ms to configureAuth
	I0913 23:54:22.933532   25213 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:54:22.933737   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:54:22.933895   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.936636   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.936886   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.936917   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.937096   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.937292   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.937466   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.937637   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.937868   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.938039   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.938053   25213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:54:23.153804   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:54:23.153831   25213 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:54:23.153844   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetURL
	I0913 23:54:23.155077   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Using libvirt version 6000000
	I0913 23:54:23.157152   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.157475   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.157508   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.157651   25213 main.go:141] libmachine: Docker is up and running!
	I0913 23:54:23.157664   25213 main.go:141] libmachine: Reticulating splines...
	I0913 23:54:23.157670   25213 client.go:171] duration metric: took 25.493288714s to LocalClient.Create
	I0913 23:54:23.157695   25213 start.go:167] duration metric: took 25.493399423s to libmachine.API.Create "ha-817269"
	I0913 23:54:23.157704   25213 start.go:293] postStartSetup for "ha-817269-m02" (driver="kvm2")
	I0913 23:54:23.157714   25213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:54:23.157730   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.157948   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:54:23.157969   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:23.160140   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.160440   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.160463   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.160641   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.160816   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.160952   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.161080   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:23.245507   25213 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:54:23.249278   25213 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:54:23.249304   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:54:23.249379   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:54:23.249482   25213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0913 23:54:23.249494   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0913 23:54:23.249605   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 23:54:23.258693   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:54:23.281493   25213 start.go:296] duration metric: took 123.774542ms for postStartSetup
	I0913 23:54:23.281550   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetConfigRaw
	I0913 23:54:23.282117   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:23.284610   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.284952   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.284980   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.285220   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:54:23.285417   25213 start.go:128] duration metric: took 25.639046852s to createHost
	I0913 23:54:23.285439   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:23.287718   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.288121   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.288148   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.288297   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.288492   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.288656   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.288821   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.288951   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:23.289154   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:23.289165   25213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:54:23.400269   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726271663.360120305
	
	I0913 23:54:23.400307   25213 fix.go:216] guest clock: 1726271663.360120305
	I0913 23:54:23.400320   25213 fix.go:229] Guest: 2024-09-13 23:54:23.360120305 +0000 UTC Remote: 2024-09-13 23:54:23.285428402 +0000 UTC m=+72.328645296 (delta=74.691903ms)
	I0913 23:54:23.400335   25213 fix.go:200] guest clock delta is within tolerance: 74.691903ms
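The guest-clock check above parses the output of "date +%s.%N" on the guest and compares it with the host clock. A small sketch of that computation, with an assumed one-second tolerance:

// Sketch of the clock-skew check: parse the guest's epoch time, compare with
// the host clock, and flag skew beyond a tolerance. Values taken from the log.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	delta, err := clockDelta("1726271663.360120305", time.Unix(1726271663, 285428402))
	if err != nil {
		panic(err)
	}
	if math.Abs(delta.Seconds()) > 1.0 {
		fmt.Printf("guest clock skewed by %v, consider syncing\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}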
	I0913 23:54:23.400341   25213 start.go:83] releasing machines lock for "ha-817269-m02", held for 25.754049851s
	I0913 23:54:23.400363   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.400609   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:23.403214   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.403547   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.403575   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.405930   25213 out.go:177] * Found network options:
	I0913 23:54:23.407210   25213 out.go:177]   - NO_PROXY=192.168.39.132
	W0913 23:54:23.408403   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:54:23.408430   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.408985   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.409163   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.409286   25213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:54:23.409330   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	W0913 23:54:23.409342   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:54:23.409408   25213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:54:23.409429   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:23.412238   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.412263   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.412647   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.412677   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.412817   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.412821   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.412840   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.413006   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.413010   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.413163   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.413174   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.413307   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:23.413343   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.413501   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:23.645752   25213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:54:23.652154   25213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:54:23.652226   25213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:54:23.668085   25213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:54:23.668109   25213 start.go:495] detecting cgroup driver to use...
	I0913 23:54:23.668162   25213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:54:23.683627   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:54:23.697419   25213 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:54:23.697474   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:54:23.711521   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:54:23.725820   25213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:54:23.838265   25213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:54:23.994503   25213 docker.go:233] disabling docker service ...
	I0913 23:54:23.994584   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:54:24.008957   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:54:24.021851   25213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:54:24.157548   25213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:54:24.268397   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:54:24.281910   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:54:24.298933   25213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:54:24.298991   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.309300   25213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:54:24.309362   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.319549   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.329711   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.340063   25213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:54:24.350714   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.362073   25213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.378622   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.388538   25213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:54:24.398162   25213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:54:24.398216   25213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:54:24.411843   25213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:54:24.422163   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:54:24.538495   25213 ssh_runner.go:195] Run: sudo systemctl restart crio
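The sed commands above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before the restart. The same two edits are shown below as in-memory string rewrites for illustration; the file I/O and the remote execution over SSH are deliberately omitted.

// Purely illustrative: the pause_image and cgroup_manager edits done above
// with sed, expressed as regexp rewrites on a sample config string.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf) // would be written back, then followed by systemctl restart crio
}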
	I0913 23:54:24.631278   25213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:54:24.631354   25213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:54:24.636266   25213 start.go:563] Will wait 60s for crictl version
	I0913 23:54:24.636315   25213 ssh_runner.go:195] Run: which crictl
	I0913 23:54:24.639869   25213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:54:24.679035   25213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:54:24.679104   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:54:24.710066   25213 ssh_runner.go:195] Run: crio --version
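The two "Will wait 60s" steps above poll first for the CRI-O socket and then for a working crictl. A generic deadline-poll sketch; running crictl without sudo is an assumption made only to keep the example self-contained.

// Sketch of a deadline-bounded poll: wait for the CRI-O socket file, then
// for a successful crictl version call, each within 60 seconds.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(deadline time.Duration, check func() error) error {
	stop := time.Now().Add(deadline)
	var err error
	for time.Now().Before(stop) {
		if err = check(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("condition not met within %v: %w", deadline, err)
}

func main() {
	sock := "/var/run/crio/crio.sock"
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		panic(err)
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("crictl", "version").Run()
	}); err != nil {
		panic(err)
	}
	fmt.Println("CRI-O is ready")
}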
	I0913 23:54:24.744990   25213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:54:24.746440   25213 out.go:177]   - env NO_PROXY=192.168.39.132
	I0913 23:54:24.747886   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:24.750572   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:24.750888   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:24.750913   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:24.751116   25213 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:54:24.755119   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:54:24.767307   25213 mustload.go:65] Loading cluster: ha-817269
	I0913 23:54:24.767500   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:54:24.767733   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:54:24.767777   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:54:24.782693   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0913 23:54:24.783111   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:54:24.783584   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:54:24.783603   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:54:24.783942   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:54:24.784120   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:54:24.785645   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:54:24.785918   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:54:24.785950   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:54:24.801316   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0913 23:54:24.801721   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:54:24.802150   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:54:24.802172   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:54:24.802472   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:54:24.802667   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:54:24.802792   25213 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.6
	I0913 23:54:24.802804   25213 certs.go:194] generating shared ca certs ...
	I0913 23:54:24.802821   25213 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:54:24.802933   25213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:54:24.802970   25213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:54:24.802978   25213 certs.go:256] generating profile certs ...
	I0913 23:54:24.803050   25213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0913 23:54:24.803075   25213 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df
	I0913 23:54:24.803088   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.254]
	I0913 23:54:25.167222   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df ...
	I0913 23:54:25.167258   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df: {Name:mk007159b7cd7eebf1ca7347528c8f29aa9b052c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:54:25.167418   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df ...
	I0913 23:54:25.167431   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df: {Name:mk268c398dae4c1095b1df23597f8dfb5196fe24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:54:25.167503   25213 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0913 23:54:25.167636   25213 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0913 23:54:25.167757   25213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
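The "generating signed profile cert" step above issues an apiserver certificate whose IP SANs cover the service IP (10.96.0.1), localhost, both node IPs and the kube-vip VIP (192.168.39.254), so any control-plane endpoint can present the same certificate. A rough crypto/x509 sketch of that operation, assuming a hypothetical helper signAPIServerCert (not minikube's certs.go):

    package certsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signAPIServerCert issues a server certificate with the given IP SANs,
    // signed by the supplied cluster CA; it returns the DER cert and its key.
    func signAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, both node IPs and the VIP
    		IPAddresses: ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
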
	I0913 23:54:25.167771   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 23:54:25.167798   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 23:54:25.167814   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:54:25.167837   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:54:25.167854   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 23:54:25.167870   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 23:54:25.167890   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 23:54:25.167902   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 23:54:25.167949   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0913 23:54:25.167978   25213 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0913 23:54:25.167986   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:54:25.168005   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:54:25.168026   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:54:25.168046   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:54:25.168080   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:54:25.168104   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.168120   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.168133   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.168160   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:54:25.171250   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:25.171670   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:54:25.171704   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:25.171843   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:54:25.172033   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:54:25.172176   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:54:25.172323   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:54:25.248199   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 23:54:25.252951   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 23:54:25.264893   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 23:54:25.268977   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 23:54:25.278801   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 23:54:25.282643   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 23:54:25.292306   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 23:54:25.296176   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 23:54:25.307298   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 23:54:25.311893   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 23:54:25.321687   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 23:54:25.325500   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 23:54:25.338339   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:54:25.362429   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:54:25.385639   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:54:25.410668   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:54:25.437559   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0913 23:54:25.462124   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 23:54:25.486142   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:54:25.511331   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:54:25.535128   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0913 23:54:25.561350   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0913 23:54:25.584473   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:54:25.609815   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 23:54:25.628611   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 23:54:25.646465   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 23:54:25.662249   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 23:54:25.679110   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 23:54:25.695420   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 23:54:25.711272   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 23:54:25.727712   25213 ssh_runner.go:195] Run: openssl version
	I0913 23:54:25.733253   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0913 23:54:25.744025   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.748785   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.748856   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.756605   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0913 23:54:25.767726   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0913 23:54:25.779862   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.784596   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.784652   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.790148   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 23:54:25.800954   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:54:25.811901   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.816453   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.816505   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.821966   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
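The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: `openssl x509 -hash -noout` prints the hash that TLS libraries look up under /etc/ssl/certs, so each installed PEM gets a <hash>.0 symlink. A small Go sketch of deriving that name, using a hypothetical helper caSymlinkName:

    package hashsketch

    import (
    	"os/exec"
    	"strings"
    )

    // caSymlinkName runs `openssl x509 -hash -noout -in <cert>` and appends the
    // conventional ".0" suffix used for the /etc/ssl/certs symlink.
    func caSymlinkName(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)) + ".0", nil
    }
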
	I0913 23:54:25.832543   25213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:54:25.836315   25213 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:54:25.836366   25213 kubeadm.go:934] updating node {m02 192.168.39.6 8443 v1.31.1 crio true true} ...
	I0913 23:54:25.836443   25213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:54:25.836465   25213 kube-vip.go:115] generating kube-vip config ...
	I0913 23:54:25.836503   25213 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 23:54:25.850858   25213 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 23:54:25.850926   25213 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0913 23:54:25.850986   25213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:54:25.860284   25213 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 23:54:25.860351   25213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 23:54:25.869346   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 23:54:25.869373   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:54:25.869416   25213 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0913 23:54:25.869423   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:54:25.869446   25213 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0913 23:54:25.873299   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 23:54:25.873323   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 23:54:26.732942   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:54:26.733020   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:54:26.737557   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 23:54:26.737593   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 23:54:27.227538   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:54:27.242458   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:54:27.242558   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:54:27.246749   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 23:54:27.246785   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
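The kubectl/kubeadm/kubelet downloads above use `?checksum=file:...sha256` URLs, so each cached binary can be verified against the published SHA-256 digest before it is copied into /var/lib/minikube/binaries. A sketch of that verification step with a hypothetical helper verifySHA256 (not the actual download library):

    package dlsketch

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifySHA256 hashes the file at path and compares it to the expected hex digest.
    func verifySHA256(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
    		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, wantHex)
    	}
    	return nil
    }
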
	I0913 23:54:27.541035   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 23:54:27.550667   25213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0913 23:54:27.568506   25213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:54:27.584943   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 23:54:27.601274   25213 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 23:54:27.605117   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:54:27.618022   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:54:27.735335   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:54:27.752814   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:54:27.753143   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:54:27.753189   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:54:27.768338   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0913 23:54:27.768723   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:54:27.769220   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:54:27.769249   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:54:27.769632   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:54:27.769801   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:54:27.769919   25213 start.go:317] joinCluster: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:54:27.770019   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 23:54:27.770041   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:54:27.773192   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:27.773721   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:54:27.773753   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:27.773906   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:54:27.774083   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:54:27.774229   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:54:27.774356   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:54:27.922400   25213 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:54:27.922447   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 99eyjh.xvl4qb8rfpz08c9j --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m02 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443"
	I0913 23:54:48.977645   25213 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 99eyjh.xvl4qb8rfpz08c9j --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m02 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443": (21.055167995s)
	I0913 23:54:48.977687   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 23:54:49.509013   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-817269-m02 minikube.k8s.io/updated_at=2024_09_13T23_54_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=ha-817269 minikube.k8s.io/primary=false
	I0913 23:54:49.635486   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-817269-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 23:54:49.761125   25213 start.go:319] duration metric: took 21.991200254s to joinCluster
	I0913 23:54:49.761213   25213 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:54:49.761544   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:54:49.763048   25213 out.go:177] * Verifying Kubernetes components...
	I0913 23:54:49.764369   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:54:50.131147   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:54:50.186487   25213 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:54:50.186844   25213 kapi.go:59] client config for ha-817269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 23:54:50.186921   25213 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
	I0913 23:54:50.187195   25213 node_ready.go:35] waiting up to 6m0s for node "ha-817269-m02" to be "Ready" ...
	I0913 23:54:50.187294   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:50.187304   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:50.187315   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:50.187322   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:50.199575   25213 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0913 23:54:50.687643   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:50.687668   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:50.687680   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:50.687686   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:50.692326   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:51.187962   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:51.187984   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:51.187995   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:51.188001   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:51.192539   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:51.687410   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:51.687432   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:51.687440   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:51.687445   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:51.721753   25213 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0913 23:54:52.187514   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:52.187537   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:52.187545   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:52.187548   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:52.190674   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:52.191187   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:52.688115   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:52.688142   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:52.688180   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:52.688189   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:52.693536   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:54:53.187969   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:53.187995   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:53.188007   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:53.188013   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:53.191362   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:53.688269   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:53.688299   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:53.688306   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:53.688309   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:53.693338   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:54:54.188343   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:54.188367   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:54.188379   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:54.188387   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:54.191692   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:54.192292   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:54.687625   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:54.687647   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:54.687657   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:54.687663   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:54.691817   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:55.187473   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:55.187501   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:55.187514   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:55.187520   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:55.191375   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:55.687378   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:55.687402   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:55.687409   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:55.687412   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:55.691066   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:56.187872   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:56.187894   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:56.187906   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:56.187910   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:56.191038   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:56.687548   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:56.687572   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:56.687580   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:56.687583   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:56.690699   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:56.691229   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:57.187650   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:57.187674   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:57.187683   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:57.187689   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:57.191958   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:57.688274   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:57.688298   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:57.688305   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:57.688309   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:57.691840   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:58.188310   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:58.188332   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:58.188340   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:58.188343   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:58.191660   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:58.687463   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:58.687485   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:58.687493   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:58.687497   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:58.690202   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:54:59.188173   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:59.188194   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:59.188201   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:59.188205   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:59.191631   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:59.192106   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:59.687479   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:59.687500   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:59.687508   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:59.687514   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:59.690851   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:00.187606   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:00.187628   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:00.187636   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:00.187640   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:00.190899   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:00.687871   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:00.687891   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:00.687900   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:00.687905   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:00.690961   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:01.187839   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:01.187863   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:01.187871   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:01.187874   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:01.191243   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:01.688094   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:01.688119   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:01.688129   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:01.688133   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:01.691175   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:01.691589   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:55:02.188070   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:02.188094   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:02.188102   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:02.188106   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:02.191108   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:02.688379   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:02.688401   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:02.688411   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:02.688417   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:02.691620   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:03.187906   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:03.187926   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:03.187934   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:03.187938   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:03.191160   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:03.688073   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:03.688096   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:03.688106   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:03.688110   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:03.691542   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:03.692067   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:55:04.187428   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:04.187455   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:04.187463   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:04.187467   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:04.190554   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:04.687487   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:04.687509   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:04.687518   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:04.687522   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:04.690777   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:05.187470   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:05.187492   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:05.187500   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:05.187504   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:05.190352   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:05.688410   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:05.688433   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:05.688440   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:05.688443   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:05.691726   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:05.692376   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:55:06.188212   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:06.188234   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:06.188242   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:06.188246   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:06.191702   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:06.688026   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:06.688048   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:06.688057   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:06.688060   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:06.691182   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.188091   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.188114   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.188125   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.188132   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.191400   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.191964   25213 node_ready.go:49] node "ha-817269-m02" has status "Ready":"True"
	I0913 23:55:07.191983   25213 node_ready.go:38] duration metric: took 17.004770061s for node "ha-817269-m02" to be "Ready" ...
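The node_ready polling above issues one GET against /api/v1/nodes/ha-817269-m02 roughly every 500ms until the node reports the Ready condition as True. Roughly equivalent client-go code, assuming a hypothetical helper nodeReady (the test harness logs through its own round-tripper instead):

    package readysketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady fetches the Node and reports whether its Ready condition is True.
    func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
    	n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range n.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
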
	I0913 23:55:07.191992   25213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:55:07.192081   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:07.192090   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.192097   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.192100   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.196407   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:55:07.202154   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.202236   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mwpbw
	I0913 23:55:07.202247   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.202254   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.202260   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.205115   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.205745   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.205761   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.205770   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.205774   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.208101   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.208585   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.208605   25213 pod_ready.go:82] duration metric: took 6.423802ms for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.208613   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.208663   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rq5pv
	I0913 23:55:07.208671   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.208677   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.208682   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.210997   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.211658   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.211675   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.211685   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.211689   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.213873   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.214435   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.214454   25213 pod_ready.go:82] duration metric: took 5.834238ms for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.214465   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.214523   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269
	I0913 23:55:07.214534   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.214543   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.214552   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.216590   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.217270   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.217287   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.217296   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.217303   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.219492   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.220051   25213 pod_ready.go:93] pod "etcd-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.220070   25213 pod_ready.go:82] duration metric: took 5.597775ms for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.220080   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.220133   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m02
	I0913 23:55:07.220164   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.220176   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.220186   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.222394   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.222973   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.222986   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.222993   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.222998   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.225189   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.225755   25213 pod_ready.go:93] pod "etcd-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.225774   25213 pod_ready.go:82] duration metric: took 5.686118ms for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.225792   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.389193   25213 request.go:632] Waited for 163.333572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:55:07.389282   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:55:07.389290   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.389300   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.389306   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.394402   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:55:07.588463   25213 request.go:632] Waited for 193.3812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.588523   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.588541   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.588548   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.588551   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.591806   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.592411   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.592429   25213 pod_ready.go:82] duration metric: took 366.63076ms for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.592439   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.788573   25213 request.go:632] Waited for 196.073848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:55:07.788630   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:55:07.788635   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.788642   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.788646   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.791885   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.988954   25213 request.go:632] Waited for 196.353296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.989035   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.989041   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.989048   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.989053   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.992088   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.992531   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.992554   25213 pod_ready.go:82] duration metric: took 400.10971ms for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.992564   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.188658   25213 request.go:632] Waited for 196.03691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:55:08.188720   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:55:08.188725   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.188732   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.188737   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.192380   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.388478   25213 request.go:632] Waited for 195.353706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:08.388555   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:08.388566   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.388576   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.388581   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.391860   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.392400   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:08.392421   25213 pod_ready.go:82] duration metric: took 399.850459ms for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.392431   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.588414   25213 request.go:632] Waited for 195.896935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:55:08.588589   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:55:08.588601   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.588609   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.588613   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.591801   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.788876   25213 request.go:632] Waited for 196.380536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:08.788939   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:08.788945   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.788956   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.788960   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.792396   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.793090   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:08.793111   25213 pod_ready.go:82] duration metric: took 400.671065ms for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.793120   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.989113   25213 request.go:632] Waited for 195.909002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:55:08.989169   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:55:08.989174   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.989181   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.989185   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.992406   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.188278   25213 request.go:632] Waited for 195.302069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:09.188371   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:09.188377   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.188384   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.188389   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.191646   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.192377   25213 pod_ready.go:93] pod "kube-proxy-7t9b2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:09.192399   25213 pod_ready.go:82] duration metric: took 399.27203ms for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.192411   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.388390   25213 request.go:632] Waited for 195.903787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:55:09.388439   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:55:09.388444   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.388451   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.388454   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.392179   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.588159   25213 request.go:632] Waited for 195.286849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.588215   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.588220   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.588227   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.588230   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.591515   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.592016   25213 pod_ready.go:93] pod "kube-proxy-p9lkl" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:09.592036   25213 pod_ready.go:82] duration metric: took 399.617448ms for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.592048   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.789123   25213 request.go:632] Waited for 196.975871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:55:09.789205   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:55:09.789210   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.789218   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.789222   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.802921   25213 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0913 23:55:09.989137   25213 request.go:632] Waited for 185.633841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.989231   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.989242   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.989261   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.989271   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.992888   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.993351   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:09.993369   25213 pod_ready.go:82] duration metric: took 401.314204ms for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.993382   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:10.188501   25213 request.go:632] Waited for 195.041759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:55:10.188585   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:55:10.188590   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.188597   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.188601   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.191888   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:10.388757   25213 request.go:632] Waited for 196.346001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:10.388807   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:10.388812   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.388820   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.388840   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.396204   25213 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 23:55:10.396959   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:10.396988   25213 pod_ready.go:82] duration metric: took 403.599221ms for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:10.397003   25213 pod_ready.go:39] duration metric: took 3.204979529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:55:10.397022   25213 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:55:10.397088   25213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:55:10.413715   25213 api_server.go:72] duration metric: took 20.652464406s to wait for apiserver process to appear ...
	I0913 23:55:10.413752   25213 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:55:10.413777   25213 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0913 23:55:10.419955   25213 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
	I0913 23:55:10.420058   25213 round_trippers.go:463] GET https://192.168.39.132:8443/version
	I0913 23:55:10.420069   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.420095   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.420105   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.421090   25213 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0913 23:55:10.421199   25213 api_server.go:141] control plane version: v1.31.1
	I0913 23:55:10.421218   25213 api_server.go:131] duration metric: took 7.458574ms to wait for apiserver health ...
	I0913 23:55:10.421225   25213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:55:10.588679   25213 request.go:632] Waited for 167.354613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.588742   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.588749   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.588760   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.588765   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.593508   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:55:10.600058   25213 system_pods.go:59] 17 kube-system pods found
	I0913 23:55:10.600090   25213 system_pods.go:61] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:55:10.600096   25213 system_pods.go:61] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:55:10.600100   25213 system_pods.go:61] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:55:10.600103   25213 system_pods.go:61] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:55:10.600107   25213 system_pods.go:61] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:55:10.600110   25213 system_pods.go:61] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:55:10.600113   25213 system_pods.go:61] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:55:10.600116   25213 system_pods.go:61] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:55:10.600120   25213 system_pods.go:61] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:55:10.600124   25213 system_pods.go:61] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:55:10.600127   25213 system_pods.go:61] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:55:10.600131   25213 system_pods.go:61] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:55:10.600136   25213 system_pods.go:61] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:55:10.600139   25213 system_pods.go:61] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:55:10.600142   25213 system_pods.go:61] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:55:10.600145   25213 system_pods.go:61] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:55:10.600148   25213 system_pods.go:61] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:55:10.600153   25213 system_pods.go:74] duration metric: took 178.923004ms to wait for pod list to return data ...
	I0913 23:55:10.600162   25213 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:55:10.788631   25213 request.go:632] Waited for 188.399764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:55:10.788695   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:55:10.788702   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.788712   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.788717   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.792847   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:55:10.793128   25213 default_sa.go:45] found service account: "default"
	I0913 23:55:10.793152   25213 default_sa.go:55] duration metric: took 192.982758ms for default service account to be created ...
	I0913 23:55:10.793162   25213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:55:10.988315   25213 request.go:632] Waited for 195.055947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.988389   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.988397   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.988407   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.988413   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.994679   25213 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 23:55:11.000285   25213 system_pods.go:86] 17 kube-system pods found
	I0913 23:55:11.000316   25213 system_pods.go:89] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:55:11.000322   25213 system_pods.go:89] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:55:11.000326   25213 system_pods.go:89] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:55:11.000330   25213 system_pods.go:89] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:55:11.000333   25213 system_pods.go:89] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:55:11.000337   25213 system_pods.go:89] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:55:11.000341   25213 system_pods.go:89] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:55:11.000346   25213 system_pods.go:89] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:55:11.000352   25213 system_pods.go:89] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:55:11.000358   25213 system_pods.go:89] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:55:11.000366   25213 system_pods.go:89] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:55:11.000371   25213 system_pods.go:89] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:55:11.000379   25213 system_pods.go:89] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:55:11.000384   25213 system_pods.go:89] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:55:11.000387   25213 system_pods.go:89] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:55:11.000390   25213 system_pods.go:89] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:55:11.000393   25213 system_pods.go:89] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:55:11.000399   25213 system_pods.go:126] duration metric: took 207.230473ms to wait for k8s-apps to be running ...
	I0913 23:55:11.000408   25213 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:55:11.000450   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:55:11.015127   25213 system_svc.go:56] duration metric: took 14.707803ms WaitForService to wait for kubelet
	I0913 23:55:11.015160   25213 kubeadm.go:582] duration metric: took 21.253914529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:55:11.015180   25213 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:55:11.188954   25213 request.go:632] Waited for 173.69537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
	I0913 23:55:11.189014   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
	I0913 23:55:11.189020   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:11.189027   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:11.189030   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:11.192671   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:11.193541   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:55:11.193579   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:55:11.193591   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:55:11.193594   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:55:11.193599   25213 node_conditions.go:105] duration metric: took 178.414001ms to run NodePressure ...
	I0913 23:55:11.193609   25213 start.go:241] waiting for startup goroutines ...
	I0913 23:55:11.193631   25213 start.go:255] writing updated cluster config ...
	I0913 23:55:11.196301   25213 out.go:201] 
	I0913 23:55:11.198620   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:55:11.198761   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:55:11.200318   25213 out.go:177] * Starting "ha-817269-m03" control-plane node in "ha-817269" cluster
	I0913 23:55:11.201674   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:55:11.201713   25213 cache.go:56] Caching tarball of preloaded images
	I0913 23:55:11.201816   25213 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:55:11.201827   25213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:55:11.201935   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:55:11.202132   25213 start.go:360] acquireMachinesLock for ha-817269-m03: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:55:11.202178   25213 start.go:364] duration metric: took 26.572µs to acquireMachinesLock for "ha-817269-m03"
	I0913 23:55:11.202195   25213 start.go:93] Provisioning new machine with config: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:55:11.202318   25213 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0913 23:55:11.203728   25213 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 23:55:11.203850   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:11.203887   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:11.218764   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I0913 23:55:11.219183   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:11.219676   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:11.219700   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:11.220096   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:11.220290   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:11.220405   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:11.220551   25213 start.go:159] libmachine.API.Create for "ha-817269" (driver="kvm2")
	I0913 23:55:11.220579   25213 client.go:168] LocalClient.Create starting
	I0913 23:55:11.220610   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:55:11.220649   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:55:11.220665   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:55:11.220727   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:55:11.220751   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:55:11.220770   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:55:11.220794   25213 main.go:141] libmachine: Running pre-create checks...
	I0913 23:55:11.220804   25213 main.go:141] libmachine: (ha-817269-m03) Calling .PreCreateCheck
	I0913 23:55:11.220943   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetConfigRaw
	I0913 23:55:11.221365   25213 main.go:141] libmachine: Creating machine...
	I0913 23:55:11.221382   25213 main.go:141] libmachine: (ha-817269-m03) Calling .Create
	I0913 23:55:11.221507   25213 main.go:141] libmachine: (ha-817269-m03) Creating KVM machine...
	I0913 23:55:11.222693   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found existing default KVM network
	I0913 23:55:11.222906   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found existing private KVM network mk-ha-817269
	I0913 23:55:11.223033   25213 main.go:141] libmachine: (ha-817269-m03) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03 ...
	I0913 23:55:11.223075   25213 main.go:141] libmachine: (ha-817269-m03) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:55:11.223152   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.223048   25987 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:55:11.223248   25213 main.go:141] libmachine: (ha-817269-m03) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:55:11.452469   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.452313   25987 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa...
	I0913 23:55:11.621065   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.620963   25987 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/ha-817269-m03.rawdisk...
	I0913 23:55:11.621096   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Writing magic tar header
	I0913 23:55:11.621109   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Writing SSH key tar header
	I0913 23:55:11.621119   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.621083   25987 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03 ...
	I0913 23:55:11.621172   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03
	I0913 23:55:11.621213   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03 (perms=drwx------)
	I0913 23:55:11.621243   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:55:11.621257   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:55:11.621270   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:55:11.621284   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:55:11.621299   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:55:11.621307   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:55:11.621319   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:55:11.621329   25213 main.go:141] libmachine: (ha-817269-m03) Creating domain...
	I0913 23:55:11.621344   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:55:11.621355   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:55:11.621365   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:55:11.621370   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home
	I0913 23:55:11.621377   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Skipping /home - not owner
	I0913 23:55:11.622208   25213 main.go:141] libmachine: (ha-817269-m03) define libvirt domain using xml: 
	I0913 23:55:11.622227   25213 main.go:141] libmachine: (ha-817269-m03) <domain type='kvm'>
	I0913 23:55:11.622236   25213 main.go:141] libmachine: (ha-817269-m03)   <name>ha-817269-m03</name>
	I0913 23:55:11.622248   25213 main.go:141] libmachine: (ha-817269-m03)   <memory unit='MiB'>2200</memory>
	I0913 23:55:11.622255   25213 main.go:141] libmachine: (ha-817269-m03)   <vcpu>2</vcpu>
	I0913 23:55:11.622262   25213 main.go:141] libmachine: (ha-817269-m03)   <features>
	I0913 23:55:11.622279   25213 main.go:141] libmachine: (ha-817269-m03)     <acpi/>
	I0913 23:55:11.622286   25213 main.go:141] libmachine: (ha-817269-m03)     <apic/>
	I0913 23:55:11.622295   25213 main.go:141] libmachine: (ha-817269-m03)     <pae/>
	I0913 23:55:11.622301   25213 main.go:141] libmachine: (ha-817269-m03)     
	I0913 23:55:11.622309   25213 main.go:141] libmachine: (ha-817269-m03)   </features>
	I0913 23:55:11.622316   25213 main.go:141] libmachine: (ha-817269-m03)   <cpu mode='host-passthrough'>
	I0913 23:55:11.622336   25213 main.go:141] libmachine: (ha-817269-m03)   
	I0913 23:55:11.622357   25213 main.go:141] libmachine: (ha-817269-m03)   </cpu>
	I0913 23:55:11.622399   25213 main.go:141] libmachine: (ha-817269-m03)   <os>
	I0913 23:55:11.622416   25213 main.go:141] libmachine: (ha-817269-m03)     <type>hvm</type>
	I0913 23:55:11.622430   25213 main.go:141] libmachine: (ha-817269-m03)     <boot dev='cdrom'/>
	I0913 23:55:11.622444   25213 main.go:141] libmachine: (ha-817269-m03)     <boot dev='hd'/>
	I0913 23:55:11.622457   25213 main.go:141] libmachine: (ha-817269-m03)     <bootmenu enable='no'/>
	I0913 23:55:11.622467   25213 main.go:141] libmachine: (ha-817269-m03)   </os>
	I0913 23:55:11.622476   25213 main.go:141] libmachine: (ha-817269-m03)   <devices>
	I0913 23:55:11.622486   25213 main.go:141] libmachine: (ha-817269-m03)     <disk type='file' device='cdrom'>
	I0913 23:55:11.622512   25213 main.go:141] libmachine: (ha-817269-m03)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/boot2docker.iso'/>
	I0913 23:55:11.622527   25213 main.go:141] libmachine: (ha-817269-m03)       <target dev='hdc' bus='scsi'/>
	I0913 23:55:11.622555   25213 main.go:141] libmachine: (ha-817269-m03)       <readonly/>
	I0913 23:55:11.622564   25213 main.go:141] libmachine: (ha-817269-m03)     </disk>
	I0913 23:55:11.622585   25213 main.go:141] libmachine: (ha-817269-m03)     <disk type='file' device='disk'>
	I0913 23:55:11.622601   25213 main.go:141] libmachine: (ha-817269-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:55:11.622617   25213 main.go:141] libmachine: (ha-817269-m03)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/ha-817269-m03.rawdisk'/>
	I0913 23:55:11.622628   25213 main.go:141] libmachine: (ha-817269-m03)       <target dev='hda' bus='virtio'/>
	I0913 23:55:11.622640   25213 main.go:141] libmachine: (ha-817269-m03)     </disk>
	I0913 23:55:11.622650   25213 main.go:141] libmachine: (ha-817269-m03)     <interface type='network'>
	I0913 23:55:11.622662   25213 main.go:141] libmachine: (ha-817269-m03)       <source network='mk-ha-817269'/>
	I0913 23:55:11.622676   25213 main.go:141] libmachine: (ha-817269-m03)       <model type='virtio'/>
	I0913 23:55:11.622686   25213 main.go:141] libmachine: (ha-817269-m03)     </interface>
	I0913 23:55:11.622694   25213 main.go:141] libmachine: (ha-817269-m03)     <interface type='network'>
	I0913 23:55:11.622707   25213 main.go:141] libmachine: (ha-817269-m03)       <source network='default'/>
	I0913 23:55:11.622717   25213 main.go:141] libmachine: (ha-817269-m03)       <model type='virtio'/>
	I0913 23:55:11.622728   25213 main.go:141] libmachine: (ha-817269-m03)     </interface>
	I0913 23:55:11.622738   25213 main.go:141] libmachine: (ha-817269-m03)     <serial type='pty'>
	I0913 23:55:11.622763   25213 main.go:141] libmachine: (ha-817269-m03)       <target port='0'/>
	I0913 23:55:11.622784   25213 main.go:141] libmachine: (ha-817269-m03)     </serial>
	I0913 23:55:11.622797   25213 main.go:141] libmachine: (ha-817269-m03)     <console type='pty'>
	I0913 23:55:11.622808   25213 main.go:141] libmachine: (ha-817269-m03)       <target type='serial' port='0'/>
	I0913 23:55:11.622818   25213 main.go:141] libmachine: (ha-817269-m03)     </console>
	I0913 23:55:11.622827   25213 main.go:141] libmachine: (ha-817269-m03)     <rng model='virtio'>
	I0913 23:55:11.622840   25213 main.go:141] libmachine: (ha-817269-m03)       <backend model='random'>/dev/random</backend>
	I0913 23:55:11.622849   25213 main.go:141] libmachine: (ha-817269-m03)     </rng>
	I0913 23:55:11.622873   25213 main.go:141] libmachine: (ha-817269-m03)     
	I0913 23:55:11.622891   25213 main.go:141] libmachine: (ha-817269-m03)     
	I0913 23:55:11.622905   25213 main.go:141] libmachine: (ha-817269-m03)   </devices>
	I0913 23:55:11.622920   25213 main.go:141] libmachine: (ha-817269-m03) </domain>
	I0913 23:55:11.622934   25213 main.go:141] libmachine: (ha-817269-m03) 
	I0913 23:55:11.629334   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:84:b2:76 in network default
	I0913 23:55:11.630117   25213 main.go:141] libmachine: (ha-817269-m03) Ensuring networks are active...
	I0913 23:55:11.630138   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:11.631149   25213 main.go:141] libmachine: (ha-817269-m03) Ensuring network default is active
	I0913 23:55:11.631540   25213 main.go:141] libmachine: (ha-817269-m03) Ensuring network mk-ha-817269 is active
	I0913 23:55:11.631902   25213 main.go:141] libmachine: (ha-817269-m03) Getting domain xml...
	I0913 23:55:11.632697   25213 main.go:141] libmachine: (ha-817269-m03) Creating domain...
	I0913 23:55:12.885338   25213 main.go:141] libmachine: (ha-817269-m03) Waiting to get IP...
	I0913 23:55:12.886050   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:12.886515   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:12.886576   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:12.886511   25987 retry.go:31] will retry after 211.035695ms: waiting for machine to come up
	I0913 23:55:13.099147   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:13.099717   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:13.099749   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:13.099656   25987 retry.go:31] will retry after 388.168891ms: waiting for machine to come up
	I0913 23:55:13.489393   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:13.489932   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:13.489960   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:13.489868   25987 retry.go:31] will retry after 357.451576ms: waiting for machine to come up
	I0913 23:55:13.849615   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:13.850201   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:13.850231   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:13.850128   25987 retry.go:31] will retry after 521.54606ms: waiting for machine to come up
	I0913 23:55:14.373576   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:14.374080   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:14.374110   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:14.374036   25987 retry.go:31] will retry after 627.057001ms: waiting for machine to come up
	I0913 23:55:15.002951   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:15.003486   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:15.003519   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:15.003440   25987 retry.go:31] will retry after 836.491577ms: waiting for machine to come up
	I0913 23:55:15.842251   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:15.842854   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:15.842973   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:15.842795   25987 retry.go:31] will retry after 722.977468ms: waiting for machine to come up
	I0913 23:55:16.566838   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:16.567174   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:16.567193   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:16.567153   25987 retry.go:31] will retry after 1.232147704s: waiting for machine to come up
	I0913 23:55:17.801545   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:17.802055   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:17.802083   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:17.801996   25987 retry.go:31] will retry after 1.803928933s: waiting for machine to come up
	I0913 23:55:19.607646   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:19.608127   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:19.608163   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:19.608067   25987 retry.go:31] will retry after 1.861415984s: waiting for machine to come up
	I0913 23:55:21.470570   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:21.471074   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:21.471105   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:21.471011   25987 retry.go:31] will retry after 2.818653272s: waiting for machine to come up
	I0913 23:55:24.292810   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:24.293254   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:24.293280   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:24.293213   25987 retry.go:31] will retry after 3.152954921s: waiting for machine to come up
	I0913 23:55:27.448595   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:27.449217   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:27.449240   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:27.449126   25987 retry.go:31] will retry after 3.308883019s: waiting for machine to come up
	I0913 23:55:30.761625   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:30.762119   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:30.762141   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:30.762080   25987 retry.go:31] will retry after 3.90905092s: waiting for machine to come up
	I0913 23:55:34.675349   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:34.675970   25213 main.go:141] libmachine: (ha-817269-m03) Found IP for machine: 192.168.39.68
	I0913 23:55:34.676002   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has current primary IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:34.676010   25213 main.go:141] libmachine: (ha-817269-m03) Reserving static IP address...
	I0913 23:55:34.676443   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find host DHCP lease matching {name: "ha-817269-m03", mac: "52:54:00:61:13:06", ip: "192.168.39.68"} in network mk-ha-817269
	I0913 23:55:34.769785   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Getting to WaitForSSH function...
	I0913 23:55:34.769817   25213 main.go:141] libmachine: (ha-817269-m03) Reserved static IP address: 192.168.39.68
	I0913 23:55:34.769831   25213 main.go:141] libmachine: (ha-817269-m03) Waiting for SSH to be available...
	I0913 23:55:34.775622   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:34.776439   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269
	I0913 23:55:34.776479   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find defined IP address of network mk-ha-817269 interface with MAC address 52:54:00:61:13:06
	I0913 23:55:34.776708   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH client type: external
	I0913 23:55:34.776735   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa (-rw-------)
	I0913 23:55:34.776825   25213 main.go:141] libmachine: (ha-817269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:55:34.776857   25213 main.go:141] libmachine: (ha-817269-m03) DBG | About to run SSH command:
	I0913 23:55:34.776871   25213 main.go:141] libmachine: (ha-817269-m03) DBG | exit 0
	I0913 23:55:34.781306   25213 main.go:141] libmachine: (ha-817269-m03) DBG | SSH cmd err, output: exit status 255: 
	I0913 23:55:34.781345   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0913 23:55:34.781360   25213 main.go:141] libmachine: (ha-817269-m03) DBG | command : exit 0
	I0913 23:55:34.781372   25213 main.go:141] libmachine: (ha-817269-m03) DBG | err     : exit status 255
	I0913 23:55:34.781384   25213 main.go:141] libmachine: (ha-817269-m03) DBG | output  : 
	I0913 23:55:37.782710   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Getting to WaitForSSH function...
	I0913 23:55:37.785389   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.785839   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:37.785869   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.785948   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH client type: external
	I0913 23:55:37.785986   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa (-rw-------)
	I0913 23:55:37.786020   25213 main.go:141] libmachine: (ha-817269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:55:37.786036   25213 main.go:141] libmachine: (ha-817269-m03) DBG | About to run SSH command:
	I0913 23:55:37.786048   25213 main.go:141] libmachine: (ha-817269-m03) DBG | exit 0
	I0913 23:55:37.915763   25213 main.go:141] libmachine: (ha-817269-m03) DBG | SSH cmd err, output: <nil>: 
	I0913 23:55:37.916036   25213 main.go:141] libmachine: (ha-817269-m03) KVM machine creation complete!
	I0913 23:55:37.916415   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetConfigRaw
	I0913 23:55:37.916905   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:37.917087   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:37.917268   25213 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:55:37.917281   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0913 23:55:37.918589   25213 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:55:37.918604   25213 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:55:37.918612   25213 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:55:37.918619   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:37.920683   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.921030   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:37.921057   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.921338   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:37.921503   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:37.921654   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:37.921766   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:37.921912   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:37.922154   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:37.922167   25213 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:55:38.031557   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:55:38.031586   25213 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:55:38.031596   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.036277   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.036740   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.036769   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.037074   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.037301   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.037606   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.037796   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.038049   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.038206   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.038216   25213 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:55:38.148130   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:55:38.148201   25213 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:55:38.148214   25213 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:55:38.148223   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:38.148465   25213 buildroot.go:166] provisioning hostname "ha-817269-m03"
	I0913 23:55:38.148501   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:38.148678   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.151235   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.151575   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.151600   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.151727   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.151899   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.152076   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.152210   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.152370   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.152575   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.152586   25213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269-m03 && echo "ha-817269-m03" | sudo tee /etc/hostname
	I0913 23:55:38.278480   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269-m03
	
	I0913 23:55:38.278510   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.281122   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.281471   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.281511   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.281738   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.281907   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.282050   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.282161   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.282293   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.282451   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.282467   25213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:55:38.400641   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:55:38.400677   25213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:55:38.400699   25213 buildroot.go:174] setting up certificates
	I0913 23:55:38.400709   25213 provision.go:84] configureAuth start
	I0913 23:55:38.400721   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:38.401032   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:38.403609   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.403981   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.404002   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.404189   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.406061   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.406400   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.406442   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.406562   25213 provision.go:143] copyHostCerts
	I0913 23:55:38.406592   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:55:38.406633   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0913 23:55:38.406646   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:55:38.406730   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:55:38.406838   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:55:38.406871   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0913 23:55:38.406880   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:55:38.406922   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:55:38.407004   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:55:38.407029   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0913 23:55:38.407038   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:55:38.407076   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:55:38.407157   25213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269-m03 san=[127.0.0.1 192.168.39.68 ha-817269-m03 localhost minikube]
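The "generating server cert" step above issues the machine's TLS server certificate, signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.39.68, ha-817269-m03, localhost, minikube). The following is a minimal Go sketch of that technique for illustration only; it is not minikube's actual code, it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem from the store, and the names and IPs are taken verbatim from the log line above.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (stands in for the .minikube/certs/ca.pem / ca-key.pem pair).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Leaf server certificate with the SANs from the log above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-817269-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-817269-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        // Print the signed server certificate; minikube instead writes it to
        // machines/server.pem and then copies it to /etc/docker on the guest.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }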
	I0913 23:55:38.545052   25213 provision.go:177] copyRemoteCerts
	I0913 23:55:38.545118   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:55:38.545149   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.548022   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.548345   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.548374   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.548530   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.548691   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.548816   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.548921   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:38.634530   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 23:55:38.634612   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:55:38.658715   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 23:55:38.658796   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:55:38.683540   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 23:55:38.683602   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:55:38.710001   25213 provision.go:87] duration metric: took 309.277958ms to configureAuth
	I0913 23:55:38.710030   25213 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:55:38.710267   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:55:38.710353   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.713112   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.713542   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.713571   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.713691   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.713871   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.714037   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.714151   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.714301   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.714452   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.714464   25213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:55:38.934725   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:55:38.934751   25213 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:55:38.934759   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetURL
	I0913 23:55:38.936292   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using libvirt version 6000000
	I0913 23:55:38.938608   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.938961   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.938987   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.939166   25213 main.go:141] libmachine: Docker is up and running!
	I0913 23:55:38.939186   25213 main.go:141] libmachine: Reticulating splines...
	I0913 23:55:38.939193   25213 client.go:171] duration metric: took 27.718607432s to LocalClient.Create
	I0913 23:55:38.939218   25213 start.go:167] duration metric: took 27.718669613s to libmachine.API.Create "ha-817269"
	I0913 23:55:38.939231   25213 start.go:293] postStartSetup for "ha-817269-m03" (driver="kvm2")
	I0913 23:55:38.939243   25213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:55:38.939265   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:38.939552   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:55:38.939572   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.941660   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.942028   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.942051   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.942268   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.942449   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.942604   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.942708   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:39.027301   25213 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:55:39.031737   25213 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:55:39.031768   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:55:39.031854   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:55:39.031944   25213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0913 23:55:39.031958   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0913 23:55:39.032065   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 23:55:39.041881   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:55:39.066826   25213 start.go:296] duration metric: took 127.580682ms for postStartSetup
	I0913 23:55:39.066888   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetConfigRaw
	I0913 23:55:39.067543   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:39.070333   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.070878   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.070918   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.071273   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:55:39.071507   25213 start.go:128] duration metric: took 27.869178264s to createHost
	I0913 23:55:39.071535   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:39.073969   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.074394   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.074421   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.074589   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:39.074788   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.074927   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.075046   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:39.075189   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:39.075409   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:39.075424   25213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:55:39.184310   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726271739.166205196
	
	I0913 23:55:39.184335   25213 fix.go:216] guest clock: 1726271739.166205196
	I0913 23:55:39.184343   25213 fix.go:229] Guest: 2024-09-13 23:55:39.166205196 +0000 UTC Remote: 2024-09-13 23:55:39.07151977 +0000 UTC m=+148.114736673 (delta=94.685426ms)
	I0913 23:55:39.184358   25213 fix.go:200] guest clock delta is within tolerance: 94.685426ms
	I0913 23:55:39.184365   25213 start.go:83] releasing machines lock for "ha-817269-m03", held for 27.982177413s
	I0913 23:55:39.184388   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.184673   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:39.187546   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.187968   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.187993   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.190368   25213 out.go:177] * Found network options:
	I0913 23:55:39.191781   25213 out.go:177]   - NO_PROXY=192.168.39.132,192.168.39.6
	W0913 23:55:39.192966   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 23:55:39.192994   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:55:39.193015   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.193603   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.193787   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.193862   25213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:55:39.193908   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	W0913 23:55:39.193976   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 23:55:39.194010   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:55:39.194083   25213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:55:39.194104   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:39.196854   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197126   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197332   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.197364   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197535   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:39.197593   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.197617   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197693   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.197770   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:39.197835   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:39.197901   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.197994   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:39.197994   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:39.198151   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:39.437719   25213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:55:39.443613   25213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:55:39.443689   25213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:55:39.459332   25213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:55:39.459363   25213 start.go:495] detecting cgroup driver to use...
	I0913 23:55:39.459460   25213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:55:39.476630   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:55:39.490488   25213 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:55:39.490557   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:55:39.504494   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:55:39.517473   25213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:55:39.626063   25213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:55:39.780927   25213 docker.go:233] disabling docker service ...
	I0913 23:55:39.781009   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:55:39.796182   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:55:39.811125   25213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:55:39.942539   25213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:55:40.073069   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:55:40.088262   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:55:40.106653   25213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:55:40.106723   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.116597   25213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:55:40.116661   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.126249   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.136027   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.147405   25213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:55:40.158939   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.170015   25213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.186803   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.196896   25213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:55:40.205832   25213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:55:40.205891   25213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:55:40.218759   25213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:55:40.227617   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:55:40.355751   25213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:55:40.454384   25213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:55:40.454455   25213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:55:40.459832   25213 start.go:563] Will wait 60s for crictl version
	I0913 23:55:40.459907   25213 ssh_runner.go:195] Run: which crictl
	I0913 23:55:40.463809   25213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:55:40.503536   25213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:55:40.503626   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:55:40.530448   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:55:40.559290   25213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:55:40.560767   25213 out.go:177]   - env NO_PROXY=192.168.39.132
	I0913 23:55:40.562083   25213 out.go:177]   - env NO_PROXY=192.168.39.132,192.168.39.6
	I0913 23:55:40.563716   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:40.566613   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:40.566935   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:40.566960   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:40.567188   25213 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:55:40.571410   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:55:40.583511   25213 mustload.go:65] Loading cluster: ha-817269
	I0913 23:55:40.583744   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:55:40.584024   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:40.584063   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:40.600039   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I0913 23:55:40.600465   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:40.600930   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:40.600952   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:40.601284   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:40.601492   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:55:40.603219   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:55:40.603501   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:40.603556   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:40.618991   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0913 23:55:40.619430   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:40.620021   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:40.620043   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:40.620349   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:40.620505   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:55:40.620651   25213 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.68
	I0913 23:55:40.620661   25213 certs.go:194] generating shared ca certs ...
	I0913 23:55:40.620674   25213 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:55:40.620787   25213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:55:40.620825   25213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:55:40.620834   25213 certs.go:256] generating profile certs ...
	I0913 23:55:40.620900   25213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0913 23:55:40.620923   25213 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c
	I0913 23:55:40.620937   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.68 192.168.39.254]
	I0913 23:55:40.830651   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c ...
	I0913 23:55:40.830684   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c: {Name:mk8d9024110bfeb203b6e91f0e321306ad905077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:55:40.830883   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c ...
	I0913 23:55:40.830902   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c: {Name:mk34f5bcfc1f2ed41966070859698727dcacea18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:55:40.831174   25213 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0913 23:55:40.831382   25213 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0913 23:55:40.831584   25213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
	I0913 23:55:40.831601   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 23:55:40.831614   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 23:55:40.831624   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:55:40.831642   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:55:40.831656   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 23:55:40.831675   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 23:55:40.831748   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 23:55:40.843975   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 23:55:40.844071   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0913 23:55:40.844125   25213 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0913 23:55:40.844140   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:55:40.844169   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:55:40.844205   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:55:40.844234   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:55:40.844289   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:55:40.844327   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0913 23:55:40.844348   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0913 23:55:40.844365   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:40.844412   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:55:40.847079   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:40.847635   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:55:40.847664   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:40.847873   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:55:40.848067   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:55:40.848231   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:55:40.848393   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:55:40.924186   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 23:55:40.929403   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 23:55:40.941602   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 23:55:40.946504   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 23:55:40.961116   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 23:55:40.967162   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 23:55:40.979653   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 23:55:40.984703   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 23:55:40.999184   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 23:55:41.009470   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 23:55:41.023915   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 23:55:41.029256   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 23:55:41.041387   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:55:41.066439   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:55:41.093525   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:55:41.120996   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:55:41.144983   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0913 23:55:41.168361   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 23:55:41.196357   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:55:41.219491   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:55:41.241960   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0913 23:55:41.265413   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0913 23:55:41.289840   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:55:41.315154   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 23:55:41.331886   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 23:55:41.350688   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 23:55:41.369557   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 23:55:41.386519   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 23:55:41.402677   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 23:55:41.421240   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 23:55:41.439499   25213 ssh_runner.go:195] Run: openssl version
	I0913 23:55:41.445412   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0913 23:55:41.456446   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0913 23:55:41.461070   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0913 23:55:41.461133   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0913 23:55:41.467115   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0913 23:55:41.478381   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0913 23:55:41.489389   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0913 23:55:41.494212   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0913 23:55:41.494273   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0913 23:55:41.499662   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 23:55:41.510112   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:55:41.520338   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:41.524729   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:41.524790   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:41.529996   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:55:41.540659   25213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:55:41.544673   25213 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:55:41.544722   25213 kubeadm.go:934] updating node {m03 192.168.39.68 8443 v1.31.1 crio true true} ...
	I0913 23:55:41.544802   25213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:55:41.544835   25213 kube-vip.go:115] generating kube-vip config ...
	I0913 23:55:41.544873   25213 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 23:55:41.562996   25213 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 23:55:41.563080   25213 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0913 23:55:41.563143   25213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:55:41.573436   25213 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 23:55:41.573508   25213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 23:55:41.582907   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 23:55:41.582953   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 23:55:41.582978   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:55:41.582956   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:55:41.583044   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:55:41.582997   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 23:55:41.583086   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:55:41.583149   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:55:41.587398   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 23:55:41.587427   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 23:55:41.626330   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:55:41.626331   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 23:55:41.626404   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 23:55:41.626448   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:55:41.662177   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 23:55:41.662209   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
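The three "Not caching binary" lines above fetch kubelet, kubectl, and kubeadm from dl.k8s.io and verify each against its published .sha256 file before the binaries are placed in /var/lib/minikube/binaries/v1.31.1 on the new node. Below is a minimal, illustrative Go sketch of that verify-then-install pattern; it is not minikube's code, it buffers the download in memory rather than streaming it, and it assumes the .sha256 URL returns only the hex digest (the form the Kubernetes release artifacts use).

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads url into memory (acceptable for a sketch; a real
    // implementation would stream to disk and reuse the local cache).
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256") // assumed to contain only the hex digest
        if err != nil {
            panic(err)
        }

        got := sha256.Sum256(bin)
        want := strings.TrimSpace(string(sum))
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }

        // On the node this would land in /var/lib/minikube/binaries/v1.31.1/;
        // here it is simply written to the current directory.
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
        fmt.Println("kubectl verified:", want)
    }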
	I0913 23:55:42.532205   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 23:55:42.542442   25213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:55:42.565043   25213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:55:42.583282   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 23:55:42.600855   25213 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 23:55:42.606296   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:55:42.620005   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:55:42.757672   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:55:42.780453   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:55:42.780941   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:42.780995   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:42.796895   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0913 23:55:42.797413   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:42.797966   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:42.797992   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:42.798351   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:42.798658   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:55:42.798828   25213 start.go:317] joinCluster: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:55:42.798981   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 23:55:42.798997   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:55:42.802191   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:42.802836   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:55:42.802876   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:42.803187   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:55:42.803393   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:55:42.803545   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:55:42.803740   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:55:42.971542   25213 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:55:42.971616   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gwzhzn.g3aaqj2b0yiq46n6 --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m03 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0913 23:56:05.142372   25213 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gwzhzn.g3aaqj2b0yiq46n6 --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m03 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (22.170717167s)
	I0913 23:56:05.142458   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 23:56:05.674909   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-817269-m03 minikube.k8s.io/updated_at=2024_09_13T23_56_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=ha-817269 minikube.k8s.io/primary=false
	I0913 23:56:05.801046   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-817269-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 23:56:05.913224   25213 start.go:319] duration metric: took 23.11439217s to joinCluster
	I0913 23:56:05.913327   25213 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:56:05.913665   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:56:05.915500   25213 out.go:177] * Verifying Kubernetes components...
	I0913 23:56:05.917249   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:56:06.263931   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:56:06.296340   25213 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:56:06.296627   25213 kapi.go:59] client config for ha-817269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 23:56:06.296685   25213 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
	I0913 23:56:06.296925   25213 node_ready.go:35] waiting up to 6m0s for node "ha-817269-m03" to be "Ready" ...
	I0913 23:56:06.297004   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:06.297015   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:06.297026   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:06.297037   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:06.302158   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:06.797370   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:06.797406   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:06.797416   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:06.797421   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:06.801509   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:07.297993   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:07.298018   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:07.298028   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:07.298034   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:07.305273   25213 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 23:56:07.797521   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:07.797550   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:07.797562   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:07.797623   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:07.801648   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:08.297396   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:08.297416   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:08.297427   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:08.297432   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:08.301194   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:08.301681   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:08.798082   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:08.798104   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:08.798113   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:08.798116   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:08.801950   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:09.297895   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:09.297918   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:09.297928   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:09.297935   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:09.301967   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:09.797541   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:09.797564   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:09.797585   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:09.797591   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:09.801453   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:10.297951   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:10.298002   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:10.298015   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:10.298021   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:10.301801   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:10.302565   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:10.797472   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:10.797498   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:10.797509   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:10.797516   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:10.804790   25213 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 23:56:11.298129   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:11.298156   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:11.298168   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:11.298173   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:11.304102   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:11.797183   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:11.797210   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:11.797222   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:11.797229   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:11.800566   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:12.297496   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:12.297520   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:12.297543   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:12.297550   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:12.303217   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:12.303762   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:12.798011   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:12.798033   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:12.798042   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:12.798046   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:12.801544   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:13.297811   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:13.297840   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:13.297851   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:13.297856   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:13.301925   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:13.797462   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:13.797487   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:13.797497   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:13.797504   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:13.803000   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:14.297500   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:14.297524   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:14.297533   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:14.297540   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:14.300969   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:14.797844   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:14.797866   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:14.797874   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:14.797878   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:14.801296   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:14.801808   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:15.298067   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:15.298092   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:15.298103   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:15.298108   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:15.302364   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:15.798086   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:15.798110   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:15.798121   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:15.798128   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:15.801671   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:16.297899   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:16.297923   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:16.297933   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:16.297941   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:16.301733   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:16.797903   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:16.797924   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:16.797930   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:16.797934   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:16.801188   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:16.801889   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:17.297931   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:17.297958   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:17.297969   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:17.297974   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:17.301670   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:17.797324   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:17.797344   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:17.797352   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:17.797356   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:17.800407   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:18.297761   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:18.297783   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:18.297791   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:18.297795   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:18.301408   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:18.797218   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:18.797241   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:18.797251   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:18.797256   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:18.800549   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:19.297907   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:19.297943   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:19.297955   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:19.297961   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:19.302619   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:19.303156   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:19.797514   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:19.797545   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:19.797554   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:19.797559   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:19.801908   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:20.297956   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:20.297980   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:20.297988   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:20.297992   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:20.301530   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:20.797290   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:20.797315   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:20.797323   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:20.797329   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:20.800694   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:21.297897   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:21.297922   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:21.297932   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:21.297937   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:21.301206   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:21.797072   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:21.797093   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:21.797100   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:21.797104   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:21.800800   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:21.801249   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:22.297565   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:22.297594   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:22.297605   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:22.297612   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:22.300844   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:22.797793   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:22.797813   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:22.797821   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:22.797825   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:22.801592   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.298061   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:23.298089   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.298097   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.298101   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.301909   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.797760   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:23.797785   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.797795   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.797812   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.801140   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.801695   25213 node_ready.go:49] node "ha-817269-m03" has status "Ready":"True"
	I0913 23:56:23.801716   25213 node_ready.go:38] duration metric: took 17.504775301s for node "ha-817269-m03" to be "Ready" ...
	I0913 23:56:23.801723   25213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:56:23.801842   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:23.801857   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.801867   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.801873   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.807883   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:23.813808   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.813882   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mwpbw
	I0913 23:56:23.813891   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.813898   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.813902   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.816951   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.817512   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:23.817528   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.817535   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.817539   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.820127   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.820758   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.820776   25213 pod_ready.go:82] duration metric: took 6.945529ms for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.820785   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.820833   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rq5pv
	I0913 23:56:23.820840   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.820847   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.820854   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.823433   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.824054   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:23.824067   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.824074   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.824078   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.826288   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.826781   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.826795   25213 pod_ready.go:82] duration metric: took 6.004504ms for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.826803   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.826849   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269
	I0913 23:56:23.826856   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.826862   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.826866   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.829007   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.829506   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:23.829518   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.829524   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.829528   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.831794   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.832529   25213 pod_ready.go:93] pod "etcd-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.832547   25213 pod_ready.go:82] duration metric: took 5.737477ms for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.832558   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.832617   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m02
	I0913 23:56:23.832627   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.832636   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.832643   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.835171   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.835846   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:23.835861   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.835870   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.835877   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.838476   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.839058   25213 pod_ready.go:93] pod "etcd-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.839074   25213 pod_ready.go:82] duration metric: took 6.509005ms for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.839082   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.998534   25213 request.go:632] Waited for 159.393284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m03
	I0913 23:56:23.998602   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m03
	I0913 23:56:23.998610   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.998621   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.998684   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.002242   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.198547   25213 request.go:632] Waited for 195.406667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:24.198647   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:24.198656   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.198668   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.198678   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.202087   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.202576   25213 pod_ready.go:93] pod "etcd-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:24.202594   25213 pod_ready.go:82] duration metric: took 363.505982ms for pod "etcd-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.202618   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.398825   25213 request.go:632] Waited for 196.129506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:56:24.398907   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:56:24.398918   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.398928   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.398936   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.402563   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.598546   25213 request.go:632] Waited for 195.348374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:24.598598   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:24.598602   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.598612   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.598619   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.601840   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.602399   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:24.602425   25213 pod_ready.go:82] duration metric: took 399.795885ms for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.602439   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.798583   25213 request.go:632] Waited for 196.054862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:56:24.798653   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:56:24.798662   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.798673   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.798683   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.802927   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:24.998623   25213 request.go:632] Waited for 194.658729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:24.998687   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:24.998694   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.998705   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.998710   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.002493   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.003098   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:25.003126   25213 pod_ready.go:82] duration metric: took 400.679484ms for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.003137   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.198324   25213 request.go:632] Waited for 195.110224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m03
	I0913 23:56:25.198399   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m03
	I0913 23:56:25.198405   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.198413   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.198420   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.202304   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.398755   25213 request.go:632] Waited for 195.370574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:25.398822   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:25.398843   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.398852   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.398859   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.403809   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:25.404315   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:25.404346   25213 pod_ready.go:82] duration metric: took 401.203093ms for pod "kube-apiserver-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.404360   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.598435   25213 request.go:632] Waited for 193.996636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:56:25.598511   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:56:25.598518   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.598528   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.598537   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.602490   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.798212   25213 request.go:632] Waited for 194.91139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:25.798292   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:25.798299   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.798316   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.798325   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.802071   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.802525   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:25.802542   25213 pod_ready.go:82] duration metric: took 398.175427ms for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.802552   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.997820   25213 request.go:632] Waited for 195.20112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:56:25.997900   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:56:25.997912   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.997923   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.997929   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.002077   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:26.198089   25213 request.go:632] Waited for 195.190135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.198184   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.198196   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.198221   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.198226   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.201626   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:26.202138   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:26.202158   25213 pod_ready.go:82] duration metric: took 399.597741ms for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.202169   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.398681   25213 request.go:632] Waited for 196.449711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m03
	I0913 23:56:26.398743   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m03
	I0913 23:56:26.398750   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.398759   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.398769   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.402887   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:26.598749   25213 request.go:632] Waited for 195.194054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:26.598809   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:26.598813   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.598820   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.598825   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.602277   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:26.602742   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:26.602760   25213 pod_ready.go:82] duration metric: took 400.584781ms for pod "kube-controller-manager-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.602777   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.797953   25213 request.go:632] Waited for 195.085414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:56:26.798138   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:56:26.798156   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.798167   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.798175   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.874051   25213 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0913 23:56:26.998492   25213 request.go:632] Waited for 123.27371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.998588   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.998598   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.998605   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.998608   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.002582   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.003238   25213 pod_ready.go:93] pod "kube-proxy-7t9b2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:27.003259   25213 pod_ready.go:82] duration metric: took 400.472179ms for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.003269   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwr6g" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.198305   25213 request.go:632] Waited for 194.97488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bwr6g
	I0913 23:56:27.198364   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bwr6g
	I0913 23:56:27.198381   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.198391   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.198396   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.201758   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.398783   25213 request.go:632] Waited for 196.370557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:27.398856   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:27.398863   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.398870   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.398873   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.402245   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.402848   25213 pod_ready.go:93] pod "kube-proxy-bwr6g" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:27.402873   25213 pod_ready.go:82] duration metric: took 399.594924ms for pod "kube-proxy-bwr6g" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.402887   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.597861   25213 request.go:632] Waited for 194.878811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:56:27.597933   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:56:27.597941   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.597950   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.597959   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.601252   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.797936   25213 request.go:632] Waited for 196.027185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:27.798005   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:27.798011   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.798021   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.798027   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.801636   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.802301   25213 pod_ready.go:93] pod "kube-proxy-p9lkl" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:27.802323   25213 pod_ready.go:82] duration metric: took 399.427432ms for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.802335   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.998667   25213 request.go:632] Waited for 196.261463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:56:27.998757   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:56:27.998765   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.998780   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.998789   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.002402   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:28.198469   25213 request.go:632] Waited for 195.365117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:28.198543   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:28.198548   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.198567   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.198575   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.201614   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:28.202191   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:28.202209   25213 pod_ready.go:82] duration metric: took 399.86721ms for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.202219   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.398302   25213 request.go:632] Waited for 196.02284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:56:28.398364   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:56:28.398374   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.398383   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.398400   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.401753   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:28.598735   25213 request.go:632] Waited for 196.352003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:28.598804   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:28.598809   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.598816   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.598820   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.603244   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:28.603711   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:28.603735   25213 pod_ready.go:82] duration metric: took 401.50969ms for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.603747   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.797891   25213 request.go:632] Waited for 194.053174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m03
	I0913 23:56:28.797948   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m03
	I0913 23:56:28.797954   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.797961   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.797964   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.802684   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:28.998666   25213 request.go:632] Waited for 195.361149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:28.998746   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:28.998755   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.998763   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.998767   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.002267   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:29.002892   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:29.002908   25213 pod_ready.go:82] duration metric: took 399.155646ms for pod "kube-scheduler-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:29.002919   25213 pod_ready.go:39] duration metric: took 5.20118564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:56:29.002932   25213 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:56:29.002982   25213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:56:29.019020   25213 api_server.go:72] duration metric: took 23.105654077s to wait for apiserver process to appear ...
	I0913 23:56:29.019048   25213 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:56:29.019071   25213 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0913 23:56:29.023793   25213 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
	I0913 23:56:29.023865   25213 round_trippers.go:463] GET https://192.168.39.132:8443/version
	I0913 23:56:29.023871   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.023878   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.023886   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.024911   25213 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0913 23:56:29.024992   25213 api_server.go:141] control plane version: v1.31.1
	I0913 23:56:29.025004   25213 api_server.go:131] duration metric: took 5.949292ms to wait for apiserver health ...
	I0913 23:56:29.025017   25213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:56:29.198483   25213 request.go:632] Waited for 173.392668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.198563   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.198569   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.198577   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.198581   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.204562   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:29.212258   25213 system_pods.go:59] 24 kube-system pods found
	I0913 23:56:29.212292   25213 system_pods.go:61] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:56:29.212297   25213 system_pods.go:61] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:56:29.212301   25213 system_pods.go:61] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:56:29.212305   25213 system_pods.go:61] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:56:29.212313   25213 system_pods.go:61] "etcd-ha-817269-m03" [d9e93af2-0a01-46eb-8ccd-09b9f3bb8976] Running
	I0913 23:56:29.212317   25213 system_pods.go:61] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:56:29.212320   25213 system_pods.go:61] "kindnet-np2s8" [97c0d537-4460-47f7-8248-1e9445ac27bd] Running
	I0913 23:56:29.212323   25213 system_pods.go:61] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:56:29.212326   25213 system_pods.go:61] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:56:29.212330   25213 system_pods.go:61] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:56:29.212333   25213 system_pods.go:61] "kube-apiserver-ha-817269-m03" [58c8463c-880c-4e4a-b4f8-1460801fab06] Running
	I0913 23:56:29.212337   25213 system_pods.go:61] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:56:29.212340   25213 system_pods.go:61] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:56:29.212345   25213 system_pods.go:61] "kube-controller-manager-ha-817269-m03" [aa8cf8e9-cafe-46cc-aa22-3c188fd160fc] Running
	I0913 23:56:29.212350   25213 system_pods.go:61] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:56:29.212354   25213 system_pods.go:61] "kube-proxy-bwr6g" [256835a2-a848-4572-9e9f-e99350c07ed2] Running
	I0913 23:56:29.212358   25213 system_pods.go:61] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:56:29.212363   25213 system_pods.go:61] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:56:29.212368   25213 system_pods.go:61] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:56:29.212373   25213 system_pods.go:61] "kube-scheduler-ha-817269-m03" [2dd97d6a-9b14-41e2-bf07-628073272e6d] Running
	I0913 23:56:29.212381   25213 system_pods.go:61] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:56:29.212387   25213 system_pods.go:61] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:56:29.212395   25213 system_pods.go:61] "kube-vip-ha-817269-m03" [e50f8baf-d5d0-4534-b1ce-eb76b23764f7] Running
	I0913 23:56:29.212401   25213 system_pods.go:61] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:56:29.212408   25213 system_pods.go:74] duration metric: took 187.384291ms to wait for pod list to return data ...
	I0913 23:56:29.212419   25213 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:56:29.397869   25213 request.go:632] Waited for 185.3661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:56:29.397927   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:56:29.397932   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.397939   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.397944   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.402445   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:29.402564   25213 default_sa.go:45] found service account: "default"
	I0913 23:56:29.402580   25213 default_sa.go:55] duration metric: took 190.156097ms for default service account to be created ...
	I0913 23:56:29.402589   25213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:56:29.597876   25213 request.go:632] Waited for 195.226759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.597941   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.597949   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.597959   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.597965   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.604837   25213 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 23:56:29.610983   25213 system_pods.go:86] 24 kube-system pods found
	I0913 23:56:29.611013   25213 system_pods.go:89] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:56:29.611019   25213 system_pods.go:89] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:56:29.611023   25213 system_pods.go:89] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:56:29.611027   25213 system_pods.go:89] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:56:29.611031   25213 system_pods.go:89] "etcd-ha-817269-m03" [d9e93af2-0a01-46eb-8ccd-09b9f3bb8976] Running
	I0913 23:56:29.611035   25213 system_pods.go:89] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:56:29.611038   25213 system_pods.go:89] "kindnet-np2s8" [97c0d537-4460-47f7-8248-1e9445ac27bd] Running
	I0913 23:56:29.611042   25213 system_pods.go:89] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:56:29.611046   25213 system_pods.go:89] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:56:29.611052   25213 system_pods.go:89] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:56:29.611056   25213 system_pods.go:89] "kube-apiserver-ha-817269-m03" [58c8463c-880c-4e4a-b4f8-1460801fab06] Running
	I0913 23:56:29.611062   25213 system_pods.go:89] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:56:29.611065   25213 system_pods.go:89] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:56:29.611069   25213 system_pods.go:89] "kube-controller-manager-ha-817269-m03" [aa8cf8e9-cafe-46cc-aa22-3c188fd160fc] Running
	I0913 23:56:29.611073   25213 system_pods.go:89] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:56:29.611076   25213 system_pods.go:89] "kube-proxy-bwr6g" [256835a2-a848-4572-9e9f-e99350c07ed2] Running
	I0913 23:56:29.611080   25213 system_pods.go:89] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:56:29.611084   25213 system_pods.go:89] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:56:29.611091   25213 system_pods.go:89] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:56:29.611095   25213 system_pods.go:89] "kube-scheduler-ha-817269-m03" [2dd97d6a-9b14-41e2-bf07-628073272e6d] Running
	I0913 23:56:29.611099   25213 system_pods.go:89] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:56:29.611136   25213 system_pods.go:89] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:56:29.611146   25213 system_pods.go:89] "kube-vip-ha-817269-m03" [e50f8baf-d5d0-4534-b1ce-eb76b23764f7] Running
	I0913 23:56:29.611150   25213 system_pods.go:89] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:56:29.611156   25213 system_pods.go:126] duration metric: took 208.562026ms to wait for k8s-apps to be running ...
	I0913 23:56:29.611165   25213 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:56:29.611210   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:56:29.626851   25213 system_svc.go:56] duration metric: took 15.678046ms WaitForService to wait for kubelet
	I0913 23:56:29.626887   25213 kubeadm.go:582] duration metric: took 23.713525989s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:56:29.626909   25213 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:56:29.798234   25213 request.go:632] Waited for 171.245269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
	I0913 23:56:29.798313   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
	I0913 23:56:29.798319   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.798326   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.798332   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.803161   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:29.804631   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:56:29.804654   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:56:29.804664   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:56:29.804667   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:56:29.804670   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:56:29.804673   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:56:29.804677   25213 node_conditions.go:105] duration metric: took 177.763156ms to run NodePressure ...
	I0913 23:56:29.804687   25213 start.go:241] waiting for startup goroutines ...
	I0913 23:56:29.804704   25213 start.go:255] writing updated cluster config ...
	I0913 23:56:29.804974   25213 ssh_runner.go:195] Run: rm -f paused
	I0913 23:56:29.859662   25213 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:56:29.861836   25213 out.go:177] * Done! kubectl is now configured to use "ha-817269" cluster and "default" namespace by default
	
	
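	The pod_ready entries above show the pattern minikube uses here: repeatedly GET each system pod (and its node) from the apiserver until the Ready condition reports True, with client-side throttling spacing the requests roughly 200-400ms apart. The Go sketch below is a minimal illustration of that polling pattern using client-go; it is not minikube's actual pod_ready implementation, and the helper name waitForPodReady, the hard-coded kube-system namespace, and the example pod name are assumptions made only for this example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the named pod until its Ready condition is True
	// or the timeout expires, mirroring the GET loops in the log above.
	// (Hypothetical helper for illustration only.)
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			// client-go applies its own client-side throttling; this sleep just
			// spaces out retries, similar to the ~400ms intervals seen above.
			time.Sleep(400 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
	}

	func main() {
		// Load the local kubeconfig (assumed to point at the test cluster).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Example pod name taken from the log above; 6m matches the wait budget used there.
		if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-proxy-p9lkl", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	In the run above the same check is applied to every system-critical pod and to each node's conditions before the apiserver healthz probe, after which minikube reports Done.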
	==> CRI-O <==
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.529848863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272010529827312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b0aed92-a859-470f-ada6-742bd162f27d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.530360096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3109383b-c0e4-4b5a-abfa-e8ebd5ccc349 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.530430037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3109383b-c0e4-4b5a-abfa-e8ebd5ccc349 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.530669353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3109383b-c0e4-4b5a-abfa-e8ebd5ccc349 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.567705599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d87835a8-396d-4b46-b5b7-809e85825eb3 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.567800990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d87835a8-396d-4b46-b5b7-809e85825eb3 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.569600208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c25919f4-2a01-4784-a794-e59cc291db5f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.570027148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272010569998252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c25919f4-2a01-4784-a794-e59cc291db5f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.570677978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdde632d-03e9-40f4-8672-7dd87df04d8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.570831816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdde632d-03e9-40f4-8672-7dd87df04d8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.571870811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdde632d-03e9-40f4-8672-7dd87df04d8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.614742013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36cd7f86-e87f-42e6-aee0-84ff45d13af9 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.614837545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36cd7f86-e87f-42e6-aee0-84ff45d13af9 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.616211410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04916b78-bf0c-4755-95ec-d80b9e05088e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.616755165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272010616729836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04916b78-bf0c-4755-95ec-d80b9e05088e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.617240562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f150174b-a475-48f0-b2f9-cf242af010a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.617307370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f150174b-a475-48f0-b2f9-cf242af010a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.617533614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f150174b-a475-48f0-b2f9-cf242af010a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.657733332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef39a079-0cec-4952-bbb1-d5526c974be7 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.657804111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef39a079-0cec-4952-bbb1-d5526c974be7 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.658867412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5e18dc9-886f-42a9-a66e-411d743647e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.659334840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272010659312902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5e18dc9-886f-42a9-a66e-411d743647e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.659967766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f25ce03a-ca6b-4654-9c05-e40810fb87ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.660019558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f25ce03a-ca6b-4654-9c05-e40810fb87ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:00:10 ha-817269 crio[664]: time="2024-09-14 00:00:10.660301907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f25ce03a-ca6b-4654-9c05-e40810fb87ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c3d244ad4c30       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2ff13b6745379       busybox-7dff88458-5cbmn
	61abb6eb65e46       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   36c20ca07db88       coredns-7c65d6cfc9-rq5pv
	4ce76346be5b3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8b315def4f628       coredns-7c65d6cfc9-mwpbw
	315adcde5c56f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4d00c26a02801       storage-provisioner
	b992c3b895609       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   f453fe4fb77a3       kindnet-dxj2g
	f8f2322f127fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   babdf5981ec86       kube-proxy-p9lkl
	2faad36b3b9a3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   7f935f0bca02a       kube-vip-ha-817269
	45371c7b7dce4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   eccbef0ef4d20       kube-scheduler-ha-817269
	33ac2ce16b58b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   0ea1c016c25f7       etcd-ha-817269
	a72c7ed6fd0b9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   07ce99ad32595       kube-controller-manager-ha-817269
	11c2a11c941f9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9f7ec2e6fa8fd       kube-apiserver-ha-817269
	
	
	==> coredns [4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94] <==
	[INFO] 10.244.0.4:55927 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000399545s
	[INFO] 10.244.0.4:49919 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003496351s
	[INFO] 10.244.0.4:46401 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000250317s
	[INFO] 10.244.0.4:47587 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000278308s
	[INFO] 10.244.2.2:47599 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000288668s
	[INFO] 10.244.2.2:53222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165792s
	[INFO] 10.244.2.2:51300 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207369s
	[INFO] 10.244.2.2:56912 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110533s
	[INFO] 10.244.2.2:37804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204459s
	[INFO] 10.244.1.2:54436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226539s
	[INFO] 10.244.1.2:56082 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001819826s
	[INFO] 10.244.1.2:58316 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222276s
	[INFO] 10.244.1.2:42306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083319s
	[INFO] 10.244.0.4:53876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020116s
	[INFO] 10.244.0.4:56768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013293s
	[INFO] 10.244.0.4:47653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.0.4:50365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154019s
	[INFO] 10.244.2.2:56862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195398s
	[INFO] 10.244.2.2:40784 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189124s
	[INFO] 10.244.2.2:42797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106937s
	[INFO] 10.244.1.2:49876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246067s
	[INFO] 10.244.0.4:44026 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000299901s
	[INFO] 10.244.0.4:40123 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000233032s
	[INFO] 10.244.1.2:42204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000500811s
	[INFO] 10.244.1.2:44587 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205062s
	
	
	==> coredns [61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997] <==
	[INFO] 10.244.1.2:46173 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000159167s
	[INFO] 10.244.1.2:57795 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000608214s
	[INFO] 10.244.0.4:58344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020088s
	[INFO] 10.244.0.4:39998 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005418524s
	[INFO] 10.244.0.4:57052 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284617s
	[INFO] 10.244.0.4:59585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149604s
	[INFO] 10.244.2.2:44013 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0019193s
	[INFO] 10.244.2.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022048s
	[INFO] 10.244.2.2:33172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001513908s
	[INFO] 10.244.1.2:35965 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790224s
	[INFO] 10.244.1.2:42555 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000321828s
	[INFO] 10.244.1.2:54761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123494s
	[INFO] 10.244.1.2:51742 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176208s
	[INFO] 10.244.2.2:55439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172115s
	[INFO] 10.244.1.2:32823 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000209293s
	[INFO] 10.244.1.2:54911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191869s
	[INFO] 10.244.1.2:45538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090559s
	[INFO] 10.244.0.4:51099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293009s
	[INFO] 10.244.0.4:52402 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204563s
	[INFO] 10.244.2.2:48710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000318957s
	[INFO] 10.244.2.2:51855 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124089s
	[INFO] 10.244.2.2:54763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000257295s
	[INFO] 10.244.2.2:56836 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186617s
	[INFO] 10.244.1.2:45824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223466s
	[INFO] 10.244.1.2:32974 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143816s
	
	
	==> describe nodes <==
	Name:               ha-817269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_53_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:53:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:00:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:54:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-817269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc026746bcc47d49a7f508137c16c0a
	  System UUID:                0bc02674-6bcc-47d4-9a7f-508137c16c0a
	  Boot ID:                    1a383d96-7a2a-4a67-94ca-0f262bc14568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5cbmn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-mwpbw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 coredns-7c65d6cfc9-rq5pv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 etcd-ha-817269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-dxj2g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-817269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-817269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-p9lkl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-817269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-817269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m13s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s  kubelet          Node ha-817269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s  kubelet          Node ha-817269 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s  kubelet          Node ha-817269 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal  NodeReady                6m2s   kubelet          Node ha-817269 status is now: NodeReady
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	
	
	Name:               ha-817269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_54_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:54:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:57:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-817269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 260fc9ca7fe3421fbf6de250d4218230
	  System UUID:                260fc9ca-7fe3-421f-bf6d-e250d4218230
	  Boot ID:                    5829ad79-34f1-4783-8856-f43f06d412e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wff9f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-817269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-qcfqk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-817269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-817269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-7t9b2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-817269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-817269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-817269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-817269-m02 status is now: NodeNotReady
	
	
	Name:               ha-817269-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_56_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:56:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:00:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-817269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cd9a23c8c734501a4ad2e1089d5fd49
	  System UUID:                7cd9a23c-8c73-4501-a4ad-2e1089d5fd49
	  Boot ID:                    85dd8157-d1db-4702-87e2-60247276cb9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vsts4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-817269-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-np2s8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-817269-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-817269-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-bwr6g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-817269-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-817269-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 4m4s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m10s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m10s)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m10s)  kubelet          Node ha-817269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m10s)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m7s                  node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal  RegisteredNode           4m6s                  node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal  RegisteredNode           4m1s                  node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	
	
	Name:               ha-817269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_57_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:57:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-817269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5153efd89a4042b8870c772e8476a0
	  System UUID:                ca5153ef-d89a-4042-b887-0c772e8476a0
	  Boot ID:                    5a56c1c7-47d1-459d-93e6-f87cc04e73b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-45h44       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-b8pch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-817269-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-817269-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep13 23:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051672] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037846] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.798368] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.951398] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.553139] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.741095] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.067049] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057264] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.183859] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.112834] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.262666] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.811628] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.142511] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.066169] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.376137] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.080016] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.054020] kauditd_printk_skb: 26 callbacks suppressed
	[Sep13 23:54] kauditd_printk_skb: 35 callbacks suppressed
	[ +43.648430] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a] <==
	{"level":"warn","ts":"2024-09-14T00:00:10.923364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.927746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.939443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.949538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.956666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.960214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.963602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.970795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.976670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.982263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.985947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.988846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.994983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:10.999622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.000641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.009142Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.009841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.012697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.014054Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.017370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.017949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.023171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.037149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.044049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:00:11.051675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:00:11 up 6 min,  0 users,  load average: 0.32, 0.24, 0.10
	Linux ha-817269 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e] <==
	I0913 23:59:38.542529       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0913 23:59:48.542302       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0913 23:59:48.542342       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0913 23:59:48.542516       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0913 23:59:48.542550       1 main.go:299] handling current node
	I0913 23:59:48.542567       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0913 23:59:48.542574       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0913 23:59:48.542657       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0913 23:59:48.542681       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0913 23:59:58.534473       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0913 23:59:58.534524       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0913 23:59:58.534657       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0913 23:59:58.534677       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0913 23:59:58.534726       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0913 23:59:58.534742       1 main.go:299] handling current node
	I0913 23:59:58.534759       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0913 23:59:58.534763       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:00:08.534236       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:00:08.534283       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:00:08.534425       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:00:08.534444       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:00:08.534556       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:00:08.534576       1 main.go:299] handling current node
	I0914 00:00:08.534587       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:00:08.534592       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08] <==
	I0913 23:53:50.984469       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0913 23:53:50.993069       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132]
	I0913 23:53:50.994839       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 23:53:51.001266       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0913 23:53:51.102551       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 23:53:52.262937       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 23:53:52.284367       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0913 23:53:52.457740       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 23:53:56.505781       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0913 23:53:56.770308       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0913 23:56:35.798668       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49360: use of closed network connection
	E0913 23:56:35.984574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49376: use of closed network connection
	E0913 23:56:36.173865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49384: use of closed network connection
	E0913 23:56:36.369578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49404: use of closed network connection
	E0913 23:56:36.550301       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49412: use of closed network connection
	E0913 23:56:36.730848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49424: use of closed network connection
	E0913 23:56:36.917966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49456: use of closed network connection
	E0913 23:56:37.115026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49472: use of closed network connection
	E0913 23:56:37.301864       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49490: use of closed network connection
	E0913 23:56:37.604065       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49532: use of closed network connection
	E0913 23:56:37.806772       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49550: use of closed network connection
	E0913 23:56:37.996521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49568: use of closed network connection
	E0913 23:56:38.167786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49584: use of closed network connection
	E0913 23:56:38.338726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49602: use of closed network connection
	E0913 23:56:38.511919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49630: use of closed network connection
	
	
	==> kube-controller-manager [a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96] <==
	I0913 23:57:08.650792       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-817269-m04\" does not exist"
	I0913 23:57:08.710055       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-817269-m04" podCIDRs=["10.244.3.0/24"]
	I0913 23:57:08.710171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:08.710261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:08.885890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:09.309005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:09.838892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.833728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.900896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.953932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.954211       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-817269-m04"
	I0913 23:57:11.016775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:19.068785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:29.630779       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-817269-m04"
	I0913 23:57:29.630879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:29.645592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:29.833375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:39.690871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:58:20.863075       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-817269-m04"
	I0913 23:58:20.863443       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0913 23:58:20.885488       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0913 23:58:21.004518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.235766ms"
	I0913 23:58:21.004934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.508µs"
	I0913 23:58:21.012888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0913 23:58:26.184970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	
	
	==> kube-proxy [f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:53:57.656514       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:53:57.683604       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
	E0913 23:53:57.683885       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:53:57.722667       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:53:57.722712       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:53:57.722736       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:53:57.725734       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:53:57.726491       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:53:57.726520       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:53:57.729944       1 config.go:199] "Starting service config controller"
	I0913 23:53:57.730942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:53:57.731195       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:53:57.731248       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:53:57.735705       1 config.go:328] "Starting node config controller"
	I0913 23:53:57.735729       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:53:57.832204       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:53:57.832244       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:53:57.835785       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5] <==
	W0913 23:53:50.269686       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 23:53:50.270023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 23:53:50.384172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 23:53:50.384266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 23:53:52.456288       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 23:56:02.084456       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-np2s8\": pod kindnet-np2s8 is already assigned to node \"ha-817269-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-np2s8" node="ha-817269-m03"
	E0913 23:56:02.084784       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bwr6g\": pod kube-proxy-bwr6g is already assigned to node \"ha-817269-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bwr6g" node="ha-817269-m03"
	E0913 23:56:02.084831       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-np2s8\": pod kindnet-np2s8 is already assigned to node \"ha-817269-m03\"" pod="kube-system/kindnet-np2s8"
	E0913 23:56:02.084950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 256835a2-a848-4572-9e9f-e99350c07ed2(kube-system/kube-proxy-bwr6g) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bwr6g"
	E0913 23:56:02.084999       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bwr6g\": pod kube-proxy-bwr6g is already assigned to node \"ha-817269-m03\"" pod="kube-system/kube-proxy-bwr6g"
	I0913 23:56:02.085031       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bwr6g" node="ha-817269-m03"
	E0913 23:56:30.809387       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vsts4\": pod busybox-7dff88458-vsts4 is already assigned to node \"ha-817269-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vsts4" node="ha-817269-m03"
	E0913 23:56:30.813264       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5d1a6d17-44a4-4b61-b86f-4455a16dee23(default/busybox-7dff88458-vsts4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vsts4"
	E0913 23:56:30.814009       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vsts4\": pod busybox-7dff88458-vsts4 is already assigned to node \"ha-817269-m03\"" pod="default/busybox-7dff88458-vsts4"
	I0913 23:56:30.814255       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vsts4" node="ha-817269-m03"
	E0913 23:56:30.847165       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wff9f\": pod busybox-7dff88458-wff9f is already assigned to node \"ha-817269-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wff9f" node="ha-817269-m02"
	E0913 23:56:30.847268       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wff9f\": pod busybox-7dff88458-wff9f is already assigned to node \"ha-817269-m02\"" pod="default/busybox-7dff88458-wff9f"
	E0913 23:56:30.906194       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:56:30.906282       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e288c7d7-36f3-4fd1-a944-403098141304(default/busybox-7dff88458-5cbmn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5cbmn"
	E0913 23:56:30.906305       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" pod="default/busybox-7dff88458-5cbmn"
	I0913 23:56:30.906349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:57:08.751565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	E0913 23:57:08.751687       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 234c68a0-c2e4-4784-8bda-6c0a1ffc84db(kube-system/kube-proxy-tdcn8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tdcn8"
	E0913 23:57:08.751719       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" pod="kube-system/kube-proxy-tdcn8"
	I0913 23:57:08.751751       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	
	
	==> kubelet <==
	Sep 13 23:58:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 23:58:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 23:58:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 23:58:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 23:58:52 ha-817269 kubelet[1306]: E0913 23:58:52.499976    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271932499469545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:58:52 ha-817269 kubelet[1306]: E0913 23:58:52.500060    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271932499469545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:02 ha-817269 kubelet[1306]: E0913 23:59:02.502016    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271942501766372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:02 ha-817269 kubelet[1306]: E0913 23:59:02.502337    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271942501766372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:12 ha-817269 kubelet[1306]: E0913 23:59:12.503921    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271952503633094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:12 ha-817269 kubelet[1306]: E0913 23:59:12.504002    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271952503633094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:22 ha-817269 kubelet[1306]: E0913 23:59:22.506058    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271962505776811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:22 ha-817269 kubelet[1306]: E0913 23:59:22.506122    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271962505776811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:32 ha-817269 kubelet[1306]: E0913 23:59:32.507654    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271972507316343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:32 ha-817269 kubelet[1306]: E0913 23:59:32.507998    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271972507316343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:42 ha-817269 kubelet[1306]: E0913 23:59:42.509416    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271982509074235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:42 ha-817269 kubelet[1306]: E0913 23:59:42.509667    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271982509074235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:52 ha-817269 kubelet[1306]: E0913 23:59:52.378383    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 23:59:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 23:59:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 23:59:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 23:59:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 23:59:52 ha-817269 kubelet[1306]: E0913 23:59:52.511505    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271992511060583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:52 ha-817269 kubelet[1306]: E0913 23:59:52.511540    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271992511060583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:02 ha-817269 kubelet[1306]: E0914 00:00:02.513470    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272002513163095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:02 ha-817269 kubelet[1306]: E0914 00:00:02.513775    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272002513163095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-817269 -n ha-817269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-817269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.97s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (51.04s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (3.198131715s)

-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0914 00:00:15.613562   30023 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:15.613698   30023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:15.613708   30023 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:15.613712   30023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:15.613914   30023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:00:15.614095   30023 out.go:352] Setting JSON to false
	I0914 00:00:15.614129   30023 mustload.go:65] Loading cluster: ha-817269
	I0914 00:00:15.614249   30023 notify.go:220] Checking for updates...
	I0914 00:00:15.614701   30023 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:00:15.614722   30023 status.go:255] checking status of ha-817269 ...
	I0914 00:00:15.615215   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:15.615285   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:15.631432   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0914 00:00:15.631982   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:15.632550   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:15.632569   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:15.633017   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:15.633207   30023 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:00:15.634849   30023 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:00:15.634870   30023 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:15.635200   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:15.635247   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:15.651168   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0914 00:00:15.651705   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:15.652246   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:15.652288   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:15.652672   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:15.652923   30023 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:00:15.656035   30023 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:15.656494   30023 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:15.656545   30023 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:15.656661   30023 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:15.656942   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:15.656989   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:15.672207   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I0914 00:00:15.672756   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:15.673208   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:15.673230   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:15.673595   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:15.673803   30023 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:00:15.673995   30023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:15.674037   30023 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:00:15.677385   30023 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:15.678029   30023 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:15.678055   30023 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:15.678180   30023 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:00:15.678360   30023 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:00:15.678528   30023 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:00:15.678635   30023 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:00:15.763528   30023 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:15.770830   30023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:15.786949   30023 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:15.786981   30023 api_server.go:166] Checking apiserver status ...
	I0914 00:00:15.787013   30023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:15.802646   30023 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:00:15.812684   30023 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:15.812763   30023 ssh_runner.go:195] Run: ls
	I0914 00:00:15.818310   30023 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:15.824991   30023 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:15.825018   30023 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:00:15.825027   30023 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:15.825043   30023 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:00:15.825450   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:15.825497   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:15.840599   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41407
	I0914 00:00:15.841025   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:15.841452   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:15.841475   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:15.841842   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:15.842013   30023 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:00:15.843627   30023 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:00:15.843645   30023 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:15.843983   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:15.844030   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:15.860255   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0914 00:00:15.860699   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:15.861227   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:15.861252   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:15.861596   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:15.861772   30023 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:00:15.865249   30023 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:15.865811   30023 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:15.865845   30023 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:15.866000   30023 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:15.866333   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:15.866377   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:15.882300   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0914 00:00:15.882719   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:15.883153   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:15.883176   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:15.883543   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:15.883741   30023 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:00:15.883970   30023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:15.883994   30023 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:00:15.887100   30023 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:15.887577   30023 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:15.887606   30023 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:15.887818   30023 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:00:15.887997   30023 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:00:15.888124   30023 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:00:15.888292   30023 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:18.412127   30023 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:18.412234   30023 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:18.412249   30023 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:18.412256   30023 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:18.412276   30023 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:18.412283   30023 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:18.412600   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:18.412656   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:18.427574   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
	I0914 00:00:18.428071   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:18.428596   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:18.428615   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:18.428968   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:18.429176   30023 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:18.430780   30023 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:18.430799   30023 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:18.431224   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:18.431273   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:18.446820   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0914 00:00:18.447310   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:18.447753   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:18.447772   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:18.448111   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:18.448270   30023 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:18.451164   30023 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:18.451565   30023 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:18.451591   30023 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:18.451700   30023 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:18.452042   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:18.452084   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:18.467032   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0914 00:00:18.467511   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:18.468027   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:18.468050   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:18.468384   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:18.468542   30023 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:18.468744   30023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:18.468771   30023 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:18.471403   30023 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:18.471878   30023 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:18.471909   30023 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:18.472048   30023 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:18.472202   30023 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:18.472345   30023 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:18.472475   30023 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:18.554700   30023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:18.569142   30023 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:18.569164   30023 api_server.go:166] Checking apiserver status ...
	I0914 00:00:18.569196   30023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:18.582883   30023 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:18.594718   30023 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:18.594770   30023 ssh_runner.go:195] Run: ls
	I0914 00:00:18.598882   30023 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:18.605925   30023 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:18.605954   30023 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:18.605981   30023 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:18.605999   30023 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:18.606285   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:18.606328   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:18.621586   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0914 00:00:18.622081   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:18.622557   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:18.622579   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:18.622879   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:18.623068   30023 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:18.624790   30023 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:18.624803   30023 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:18.625073   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:18.625104   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:18.641670   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I0914 00:00:18.642066   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:18.642575   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:18.642597   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:18.642942   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:18.643137   30023 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:18.647209   30023 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:18.647970   30023 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:18.648002   30023 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:18.648284   30023 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:18.648632   30023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:18.648672   30023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:18.665401   30023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0914 00:00:18.665901   30023 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:18.666470   30023 main.go:141] libmachine: Using API Version  1
	I0914 00:00:18.666493   30023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:18.666816   30023 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:18.667084   30023 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:18.667247   30023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:18.667265   30023 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:18.670787   30023 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:18.671381   30023 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:18.671424   30023 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:18.671586   30023 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:18.671812   30023 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:18.671983   30023 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:18.672196   30023 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:18.751386   30023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:18.765754   30023 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (5.212047746s)

-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0914 00:00:19.734664   30122 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:19.734786   30122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:19.734794   30122 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:19.734798   30122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:19.734962   30122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:00:19.735124   30122 out.go:352] Setting JSON to false
	I0914 00:00:19.735151   30122 mustload.go:65] Loading cluster: ha-817269
	I0914 00:00:19.735252   30122 notify.go:220] Checking for updates...
	I0914 00:00:19.735529   30122 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:00:19.735542   30122 status.go:255] checking status of ha-817269 ...
	I0914 00:00:19.735955   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:19.736007   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:19.754854   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0914 00:00:19.755295   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:19.755878   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:19.755898   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:19.756316   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:19.756557   30122 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:00:19.758296   30122 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:00:19.758311   30122 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:19.758647   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:19.758688   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:19.773891   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I0914 00:00:19.774448   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:19.774913   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:19.774937   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:19.775254   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:19.775409   30122 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:00:19.778401   30122 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:19.778859   30122 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:19.778891   30122 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:19.779086   30122 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:19.779396   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:19.779438   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:19.795278   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0914 00:00:19.795795   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:19.796453   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:19.796475   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:19.796861   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:19.797066   30122 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:00:19.797268   30122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:19.797298   30122 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:00:19.800884   30122 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:19.801285   30122 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:19.801306   30122 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:19.801503   30122 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:00:19.801674   30122 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:00:19.801864   30122 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:00:19.802001   30122 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:00:19.883615   30122 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:19.890176   30122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:19.906339   30122 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:19.906379   30122 api_server.go:166] Checking apiserver status ...
	I0914 00:00:19.906421   30122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:19.922150   30122 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:00:19.932903   30122 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:19.932963   30122 ssh_runner.go:195] Run: ls
	I0914 00:00:19.938049   30122 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:19.944859   30122 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:19.944883   30122 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:00:19.944892   30122 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:19.944908   30122 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:00:19.945231   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:19.945269   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:19.960198   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0914 00:00:19.960624   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:19.961160   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:19.961178   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:19.961496   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:19.961687   30122 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:00:19.963211   30122 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:00:19.963227   30122 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:19.963571   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:19.963614   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:19.979028   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0914 00:00:19.979466   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:19.979957   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:19.979975   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:19.980272   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:19.980451   30122 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:00:19.983139   30122 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:19.983527   30122 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:19.983547   30122 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:19.983717   30122 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:19.984034   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:19.984085   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:19.998617   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39419
	I0914 00:00:19.999246   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:19.999738   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:19.999757   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:20.000143   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:20.000315   30122 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:00:20.000487   30122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:20.000504   30122 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:00:20.003178   30122 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:20.003544   30122 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:20.003579   30122 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:20.003719   30122 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:00:20.003902   30122 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:00:20.004041   30122 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:00:20.004178   30122 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:21.484059   30122 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:21.484136   30122 retry.go:31] will retry after 137.956114ms: dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:24.556036   30122 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:24.556132   30122 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:24.556155   30122 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:24.556165   30122 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:24.556201   30122 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:24.556216   30122 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:24.556700   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:24.556779   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:24.571903   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0914 00:00:24.572378   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:24.572874   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:24.572894   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:24.573192   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:24.573383   30122 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:24.574728   30122 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:24.574742   30122 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:24.575047   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:24.575105   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:24.590967   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0914 00:00:24.591341   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:24.591910   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:24.591932   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:24.592246   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:24.592451   30122 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:24.595572   30122 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:24.595982   30122 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:24.596009   30122 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:24.596160   30122 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:24.596459   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:24.596494   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:24.612055   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0914 00:00:24.612631   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:24.613118   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:24.613141   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:24.613529   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:24.613727   30122 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:24.613904   30122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:24.613936   30122 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:24.617054   30122 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:24.617471   30122 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:24.617504   30122 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:24.617656   30122 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:24.617785   30122 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:24.617940   30122 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:24.618063   30122 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:24.698977   30122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:24.713563   30122 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:24.713593   30122 api_server.go:166] Checking apiserver status ...
	I0914 00:00:24.713625   30122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:24.727244   30122 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:24.737045   30122 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:24.737103   30122 ssh_runner.go:195] Run: ls
	I0914 00:00:24.741968   30122 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:24.747450   30122 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:24.747474   30122 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:24.747485   30122 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:24.747515   30122 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:24.747857   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:24.747900   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:24.764153   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 00:00:24.764669   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:24.765174   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:24.765196   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:24.765572   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:24.765760   30122 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:24.767693   30122 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:24.767708   30122 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:24.768067   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:24.768101   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:24.783065   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34575
	I0914 00:00:24.783558   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:24.784036   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:24.784059   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:24.784384   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:24.784594   30122 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:24.787515   30122 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:24.788180   30122 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:24.788207   30122 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:24.788365   30122 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:24.788678   30122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:24.788726   30122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:24.805677   30122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0914 00:00:24.806049   30122 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:24.806527   30122 main.go:141] libmachine: Using API Version  1
	I0914 00:00:24.806547   30122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:24.806853   30122 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:24.807071   30122 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:24.807280   30122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:24.807303   30122 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:24.810577   30122 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:24.811073   30122 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:24.811103   30122 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:24.811290   30122 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:24.811479   30122 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:24.811655   30122 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:24.811837   30122 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:24.891160   30122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:24.905268   30122 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
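
The repeated "dial tcp 192.168.39.6:22: connect: no route to host" lines above are what turn ha-817269-m02 into Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent in the stdout block: the SSH port on the stopped secondary node never answers, the retries are exhausted, and the storage-capacity check on /var fails. The sketch below reproduces that dial-and-retry probe in isolation; the address, timeout, and retry count are assumptions taken from the log, and this is not the minikube sshutil/retry implementation.

// ssh_dial_probe.go — illustrative sketch (not minikube code) of the
// dial-and-retry behaviour logged by sshutil.go/retry.go above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.6:22" // the m02 node address from the log (assumption)
	for attempt := 1; attempt <= 3; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("attempt %d: dial %s failed: %v (will retry)\n", attempt, addr, err)
			time.Sleep(250 * time.Millisecond)
			continue
		}
		conn.Close()
		fmt.Println("ssh port reachable; host would be reported as Running")
		return
	}
	// Once the retries are exhausted the node is reported as Host:Error,
	// Kubelet:Nonexistent, APIServer:Nonexistent, matching the stdout above.
	fmt.Println("ssh port unreachable; host would be reported as Error")
}
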
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (5.016991827s)

                                                
                                                
-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:00:26.363086   30239 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:26.363539   30239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:26.363594   30239 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:26.363612   30239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:26.364088   30239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:00:26.364464   30239 out.go:352] Setting JSON to false
	I0914 00:00:26.364507   30239 mustload.go:65] Loading cluster: ha-817269
	I0914 00:00:26.364659   30239 notify.go:220] Checking for updates...
	I0914 00:00:26.365312   30239 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:00:26.365330   30239 status.go:255] checking status of ha-817269 ...
	I0914 00:00:26.365763   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:26.365802   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:26.382293   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0914 00:00:26.382788   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:26.383313   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:26.383336   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:26.383838   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:26.384038   30239 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:00:26.385716   30239 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:00:26.385734   30239 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:26.386178   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:26.386235   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:26.402108   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0914 00:00:26.402663   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:26.403238   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:26.403276   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:26.403610   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:26.403867   30239 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:00:26.406900   30239 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:26.407295   30239 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:26.407320   30239 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:26.407481   30239 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:26.407773   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:26.407838   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:26.423122   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0914 00:00:26.423661   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:26.424224   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:26.424239   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:26.424569   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:26.424726   30239 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:00:26.424896   30239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:26.424926   30239 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:00:26.428247   30239 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:26.428697   30239 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:26.428731   30239 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:26.428862   30239 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:00:26.429026   30239 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:00:26.429255   30239 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:00:26.429509   30239 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:00:26.515868   30239 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:26.522402   30239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:26.537097   30239 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:26.537138   30239 api_server.go:166] Checking apiserver status ...
	I0914 00:00:26.537181   30239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:26.552859   30239 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:00:26.566553   30239 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:26.566621   30239 ssh_runner.go:195] Run: ls
	I0914 00:00:26.574592   30239 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:26.579868   30239 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:26.579892   30239 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:00:26.579902   30239 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:26.579917   30239 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:00:26.580195   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:26.580233   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:26.595812   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0914 00:00:26.596178   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:26.596670   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:26.596693   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:26.596995   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:26.597161   30239 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:00:26.598560   30239 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:00:26.598576   30239 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:26.598846   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:26.598882   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:26.615544   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0914 00:00:26.615943   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:26.616436   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:26.616460   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:26.616805   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:26.616978   30239 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:00:26.619676   30239 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:26.620121   30239 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:26.620136   30239 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:26.620343   30239 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:26.620647   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:26.620691   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:26.635657   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0914 00:00:26.636147   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:26.636633   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:26.636660   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:26.637016   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:26.637197   30239 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:00:26.637387   30239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:26.637411   30239 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:00:26.640480   30239 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:26.641033   30239 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:26.641062   30239 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:26.641309   30239 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:00:26.641513   30239 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:00:26.641648   30239 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:00:26.641755   30239 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:27.628033   30239 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:27.628098   30239 retry.go:31] will retry after 280.216528ms: dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:30.988030   30239 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:30.988147   30239 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:30.988170   30239 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:30.988180   30239 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:30.988206   30239 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:30.988218   30239 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:30.988551   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:30.988616   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:31.003521   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0914 00:00:31.003916   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:31.004429   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:31.004454   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:31.004780   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:31.004965   30239 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:31.006614   30239 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:31.006633   30239 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:31.007027   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:31.007074   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:31.021936   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0914 00:00:31.022397   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:31.022928   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:31.022951   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:31.023279   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:31.023460   30239 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:31.026277   30239 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:31.026835   30239 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:31.026868   30239 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:31.027015   30239 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:31.027359   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:31.027393   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:31.043869   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0914 00:00:31.044321   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:31.044830   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:31.044853   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:31.045129   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:31.045314   30239 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:31.045485   30239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:31.045513   30239 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:31.048310   30239 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:31.048752   30239 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:31.048785   30239 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:31.048973   30239 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:31.049166   30239 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:31.049298   30239 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:31.049466   30239 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:31.130784   30239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:31.145853   30239 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:31.145889   30239 api_server.go:166] Checking apiserver status ...
	I0914 00:00:31.145933   30239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:31.160099   30239 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:31.170547   30239 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:31.170603   30239 ssh_runner.go:195] Run: ls
	I0914 00:00:31.175238   30239 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:31.181654   30239 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:31.181679   30239 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:31.181688   30239 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:31.181702   30239 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:31.181973   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:31.182009   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:31.198889   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0914 00:00:31.199344   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:31.199836   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:31.199863   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:31.200171   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:31.200350   30239 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:31.201883   30239 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:31.201897   30239 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:31.202165   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:31.202211   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:31.217469   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0914 00:00:31.217851   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:31.218278   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:31.218297   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:31.218608   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:31.218771   30239 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:31.221662   30239 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:31.222113   30239 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:31.222148   30239 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:31.222324   30239 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:31.222614   30239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:31.222652   30239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:31.237427   30239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0914 00:00:31.237809   30239 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:31.238236   30239 main.go:141] libmachine: Using API Version  1
	I0914 00:00:31.238262   30239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:31.238549   30239 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:31.238761   30239 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:31.238930   30239 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:31.238949   30239 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:31.241961   30239 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:31.242439   30239 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:31.242479   30239 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:31.242652   30239 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:31.242853   30239 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:31.243026   30239 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:31.243192   30239 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:31.323200   30239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:31.337791   30239 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
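
The recurring "unable to find freezer cgroup" warnings above come from a lookup of the freezer controller in the apiserver's /proc/<pid>/cgroup file; on cgroup v2 hosts that controller line is absent, so the grep exits non-zero and the status check falls back to probing healthz directly, which is why the warning is harmless here. Below is a minimal sketch of such a lookup, with the PID hard-coded as an assumption (the log obtained it via pgrep); it is not the minikube implementation.

// freezer_check.go — illustrative sketch (not minikube code) of the freezer
// cgroup lookup that api_server.go warns about above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := "1123" // kube-apiserver PID from the log (assumption); substitute a real PID
	f, err := os.Open("/proc/" + pid + "/cgroup")
	if err != nil {
		fmt.Println("cannot read cgroup file:", err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// cgroup v1 lines look like "7:freezer:/kubepods/...".
		fields := strings.SplitN(scanner.Text(), ":", 3)
		if len(fields) == 3 && strings.Contains(fields[1], "freezer") {
			fmt.Println("freezer cgroup:", fields[2])
			return
		}
	}
	fmt.Println("unable to find freezer cgroup (expected on cgroup v2 hosts)")
}
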
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (4.389753567s)

                                                
                                                
-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:00:33.400000   30340 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:33.400292   30340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:33.400303   30340 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:33.400314   30340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:33.400513   30340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:00:33.400715   30340 out.go:352] Setting JSON to false
	I0914 00:00:33.400761   30340 mustload.go:65] Loading cluster: ha-817269
	I0914 00:00:33.400872   30340 notify.go:220] Checking for updates...
	I0914 00:00:33.401341   30340 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:00:33.401366   30340 status.go:255] checking status of ha-817269 ...
	I0914 00:00:33.401856   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:33.401899   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:33.417626   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38825
	I0914 00:00:33.418109   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:33.418639   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:33.418664   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:33.419111   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:33.419323   30340 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:00:33.421121   30340 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:00:33.421139   30340 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:33.421580   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:33.421627   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:33.436940   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0914 00:00:33.437407   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:33.437924   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:33.437961   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:33.438311   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:33.438571   30340 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:00:33.441799   30340 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:33.442294   30340 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:33.442330   30340 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:33.442483   30340 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:33.442778   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:33.442832   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:33.459338   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0914 00:00:33.459823   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:33.460334   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:33.460360   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:33.460715   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:33.460928   30340 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:00:33.461167   30340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:33.461201   30340 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:00:33.464008   30340 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:33.464517   30340 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:33.464543   30340 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:33.464723   30340 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:00:33.464875   30340 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:00:33.465031   30340 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:00:33.465152   30340 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:00:33.549486   30340 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:33.558980   30340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:33.577139   30340 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:33.577179   30340 api_server.go:166] Checking apiserver status ...
	I0914 00:00:33.577221   30340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:33.592940   30340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:00:33.602930   30340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:33.602984   30340 ssh_runner.go:195] Run: ls
	I0914 00:00:33.608351   30340 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:33.614711   30340 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:33.614741   30340 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:00:33.614752   30340 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:33.614773   30340 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:00:33.615164   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:33.615203   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:33.630367   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0914 00:00:33.630836   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:33.631315   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:33.631331   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:33.631663   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:33.631918   30340 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:00:33.633713   30340 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:00:33.633732   30340 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:33.634015   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:33.634077   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:33.650120   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45205
	I0914 00:00:33.650611   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:33.651140   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:33.651162   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:33.651510   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:33.651702   30340 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:00:33.654552   30340 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:33.654999   30340 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:33.655028   30340 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:33.655137   30340 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:33.655430   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:33.655465   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:33.670277   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0914 00:00:33.670738   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:33.671215   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:33.671235   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:33.671576   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:33.671737   30340 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:00:33.671921   30340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:33.671942   30340 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:00:33.675028   30340 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:33.675443   30340 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:33.675468   30340 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:33.675622   30340 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:00:33.675947   30340 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:00:33.676135   30340 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:00:33.676285   30340 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:34.060053   30340 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:34.060101   30340 retry.go:31] will retry after 247.655751ms: dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:37.392060   30340 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:37.392139   30340 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:37.392153   30340 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:37.392164   30340 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:37.392182   30340 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:37.392190   30340 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:37.392532   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:37.392579   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:37.407777   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0914 00:00:37.408194   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:37.408658   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:37.408682   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:37.408989   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:37.409147   30340 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:37.410644   30340 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:37.410661   30340 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:37.410950   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:37.410983   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:37.426828   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I0914 00:00:37.427210   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:37.427753   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:37.427781   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:37.428084   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:37.428262   30340 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:37.431693   30340 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:37.432170   30340 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:37.432192   30340 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:37.432397   30340 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:37.432734   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:37.432781   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:37.448103   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0914 00:00:37.448627   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:37.449220   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:37.449246   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:37.449617   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:37.449800   30340 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:37.449976   30340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:37.449997   30340 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:37.452972   30340 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:37.453440   30340 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:37.453471   30340 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:37.453614   30340 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:37.453798   30340 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:37.453932   30340 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:37.454050   30340 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:37.535288   30340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:37.552281   30340 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:37.552312   30340 api_server.go:166] Checking apiserver status ...
	I0914 00:00:37.552382   30340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:37.566800   30340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:37.576244   30340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:37.576301   30340 ssh_runner.go:195] Run: ls
	I0914 00:00:37.580455   30340 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:37.587744   30340 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:37.587767   30340 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:37.587775   30340 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:37.587804   30340 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:37.588090   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:37.588123   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:37.605637   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I0914 00:00:37.606138   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:37.606684   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:37.606707   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:37.607003   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:37.607322   30340 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:37.608862   30340 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:37.608878   30340 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:37.609178   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:37.609237   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:37.624381   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0914 00:00:37.624772   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:37.625434   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:37.625461   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:37.625864   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:37.626057   30340 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:37.629341   30340 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:37.629755   30340 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:37.629796   30340 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:37.629912   30340 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:37.630208   30340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:37.630243   30340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:37.645151   30340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0914 00:00:37.645608   30340 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:37.646031   30340 main.go:141] libmachine: Using API Version  1
	I0914 00:00:37.646051   30340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:37.646323   30340 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:37.646510   30340 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:37.646701   30340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:37.646721   30340 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:37.649441   30340 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:37.649943   30340 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:37.649977   30340 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:37.650129   30340 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:37.650334   30340 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:37.650474   30340 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:37.650609   30340 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:37.730956   30340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:37.745834   30340 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
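The stderr block above shows why ha-817269-m02 is reported as Host:Error while the other nodes stay Running: every SSH dial to 192.168.39.6:22 fails with "connect: no route to host", and once the retries are exhausted the status code gives up on the node and reports kubelet and apiserver as Nonexistent. Below is a minimal, self-contained Go sketch of that kind of reachability probe, for illustration only. It is not minikube's sshutil code; the address and the short retry delay are taken from the log, and the function name probeSSH is made up for the example.

// Illustrative sketch (not minikube's sshutil): probe a node's SSH port the
// way the status check above does, retrying briefly before reporting Error.
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH returns nil if a TCP connection to addr succeeds within the
// allotted attempts; otherwise it returns the last dial error.
func probeSSH(addr string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port reachable; a real check would open an SSH session next
		}
		// "connect: no route to host" surfaces here as a *net.OpError.
		time.Sleep(wait)
	}
	return err
}

func main() {
	// Address from the log; retry delay mirrors the ~250ms retry seen above.
	if err := probeSSH("192.168.39.6:22", 3, 250*time.Millisecond); err != nil {
		fmt.Println("host status = Error:", err) // maps to Host:Error, Kubelet:Nonexistent
		return
	}
	fmt.Println("host status = Running")
}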
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (3.736156163s)

                                                
                                                
-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:00:42.174581   30456 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:42.174696   30456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:42.174707   30456 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:42.174714   30456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:42.174930   30456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:00:42.175158   30456 out.go:352] Setting JSON to false
	I0914 00:00:42.175191   30456 mustload.go:65] Loading cluster: ha-817269
	I0914 00:00:42.175352   30456 notify.go:220] Checking for updates...
	I0914 00:00:42.175706   30456 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:00:42.175724   30456 status.go:255] checking status of ha-817269 ...
	I0914 00:00:42.176184   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:42.176244   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:42.194823   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46355
	I0914 00:00:42.195293   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:42.195948   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:42.195986   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:42.196340   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:42.196538   30456 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:00:42.198285   30456 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:00:42.198307   30456 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:42.198589   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:42.198645   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:42.213460   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0914 00:00:42.213884   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:42.214396   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:42.214432   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:42.214761   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:42.214926   30456 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:00:42.217770   30456 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:42.218165   30456 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:42.218193   30456 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:42.218308   30456 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:42.218591   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:42.218623   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:42.233724   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34975
	I0914 00:00:42.234136   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:42.234657   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:42.234683   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:42.234989   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:42.235147   30456 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:00:42.235345   30456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:42.235388   30456 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:00:42.238348   30456 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:42.238841   30456 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:42.238878   30456 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:42.239036   30456 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:00:42.239227   30456 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:00:42.239387   30456 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:00:42.239539   30456 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:00:42.324606   30456 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:42.331213   30456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:42.345966   30456 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:42.346002   30456 api_server.go:166] Checking apiserver status ...
	I0914 00:00:42.346033   30456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:42.365548   30456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:00:42.375712   30456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:42.375780   30456 ssh_runner.go:195] Run: ls
	I0914 00:00:42.380646   30456 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:42.387033   30456 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:42.387067   30456 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:00:42.387082   30456 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:42.387132   30456 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:00:42.387592   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:42.387638   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:42.405512   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0914 00:00:42.406009   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:42.406546   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:42.406566   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:42.406911   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:42.407139   30456 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:00:42.408875   30456 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:00:42.408891   30456 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:42.409280   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:42.409318   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:42.424395   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0914 00:00:42.424871   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:42.425431   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:42.425448   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:42.425802   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:42.425980   30456 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:00:42.428945   30456 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:42.429368   30456 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:42.429413   30456 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:42.429529   30456 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:42.429856   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:42.429895   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:42.445519   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0914 00:00:42.445982   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:42.446449   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:42.446470   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:42.446765   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:42.446967   30456 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:00:42.447165   30456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:42.447186   30456 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:00:42.450307   30456 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:42.450820   30456 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:42.450847   30456 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:42.451065   30456 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:00:42.451233   30456 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:00:42.451362   30456 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:00:42.451493   30456 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:45.516001   30456 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:45.516114   30456 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:45.516132   30456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:45.516138   30456 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:45.516155   30456 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:45.516162   30456 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:45.516485   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:45.516534   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:45.531909   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0914 00:00:45.532392   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:45.532917   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:45.532943   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:45.533235   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:45.533408   30456 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:45.535060   30456 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:45.535074   30456 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:45.535391   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:45.535429   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:45.550773   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0914 00:00:45.551252   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:45.551738   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:45.551759   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:45.552047   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:45.552213   30456 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:45.554866   30456 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:45.555264   30456 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:45.555293   30456 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:45.555425   30456 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:45.555742   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:45.555799   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:45.571564   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0914 00:00:45.572023   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:45.572498   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:45.572526   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:45.572825   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:45.572984   30456 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:45.573147   30456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:45.573168   30456 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:45.576076   30456 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:45.576510   30456 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:45.576530   30456 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:45.576680   30456 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:45.576858   30456 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:45.576975   30456 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:45.577097   30456 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:45.659194   30456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:45.676977   30456 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:45.677004   30456 api_server.go:166] Checking apiserver status ...
	I0914 00:00:45.677036   30456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:45.693151   30456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:45.704771   30456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:45.704818   30456 ssh_runner.go:195] Run: ls
	I0914 00:00:45.709556   30456 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:45.715593   30456 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:45.715624   30456 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:45.715633   30456 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:45.715649   30456 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:45.716049   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:45.716094   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:45.731250   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0914 00:00:45.731688   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:45.732145   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:45.732169   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:45.732463   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:45.732656   30456 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:45.734129   30456 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:45.734142   30456 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:45.734440   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:45.734482   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:45.749448   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0914 00:00:45.749918   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:45.750383   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:45.750404   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:45.750714   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:45.750865   30456 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:45.753923   30456 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:45.754327   30456 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:45.754354   30456 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:45.754506   30456 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:45.754842   30456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:45.754881   30456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:45.769661   30456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0914 00:00:45.770198   30456 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:45.770726   30456 main.go:141] libmachine: Using API Version  1
	I0914 00:00:45.770752   30456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:45.771133   30456 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:45.771329   30456 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:45.771524   30456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:45.771546   30456 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:45.774338   30456 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:45.774855   30456 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:45.774875   30456 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:45.775096   30456 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:45.775261   30456 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:45.775426   30456 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:45.775554   30456 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:45.854713   30456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:45.869085   30456 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
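For the nodes that are reachable, the apiserver check in the logs above runs in two steps: it finds the kube-apiserver PID with pgrep, tries to read its freezer cgroup (which only produces a warning here, likely because the host uses cgroup v2 and /proc/PID/cgroup no longer lists per-controller entries), and then falls back to GET https://192.168.39.254:8443/healthz, treating a 200 "ok" response as Running. The following rough Go sketch shows only that fallback probe; it is illustrative and not minikube's api_server.go, and it skips TLS verification for brevity where the real check trusts the cluster CA.

// Illustrative sketch of the healthz fallback used by the status check above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the endpoint returned HTTP 200 with body "ok",
// which is what the log treats as "apiserver status = Running".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch self-contained; do not do this
		// in real tooling - the actual check uses the cluster's CA certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// VIP and port taken from the log output above.
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println("apiserver healthy:", ok, "err:", err)
}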
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (3.722219954s)

                                                
                                                
-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:00:48.750562   30557 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:00:48.750680   30557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:48.750690   30557 out.go:358] Setting ErrFile to fd 2...
	I0914 00:00:48.750696   30557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:00:48.750879   30557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:00:48.751070   30557 out.go:352] Setting JSON to false
	I0914 00:00:48.751103   30557 mustload.go:65] Loading cluster: ha-817269
	I0914 00:00:48.751204   30557 notify.go:220] Checking for updates...
	I0914 00:00:48.751538   30557 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:00:48.751555   30557 status.go:255] checking status of ha-817269 ...
	I0914 00:00:48.752022   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:48.752069   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:48.770111   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0914 00:00:48.770593   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:48.771176   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:48.771208   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:48.771499   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:48.771663   30557 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:00:48.773173   30557 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:00:48.773192   30557 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:48.773526   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:48.773565   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:48.788195   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0914 00:00:48.788641   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:48.789127   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:48.789149   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:48.789441   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:48.789645   30557 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:00:48.792389   30557 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:48.792792   30557 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:48.792817   30557 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:48.792921   30557 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:00:48.793202   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:48.793262   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:48.808897   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I0914 00:00:48.809297   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:48.809772   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:48.809795   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:48.810251   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:48.810449   30557 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:00:48.810630   30557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:48.810660   30557 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:00:48.813530   30557 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:48.813879   30557 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:00:48.813910   30557 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:00:48.814013   30557 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:00:48.814192   30557 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:00:48.814338   30557 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:00:48.814489   30557 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:00:48.900447   30557 ssh_runner.go:195] Run: systemctl --version
	I0914 00:00:48.906269   30557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:48.920058   30557 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:48.920093   30557 api_server.go:166] Checking apiserver status ...
	I0914 00:00:48.920125   30557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:48.935829   30557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:00:48.945918   30557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:48.945971   30557 ssh_runner.go:195] Run: ls
	I0914 00:00:48.952177   30557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:48.958556   30557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:48.958583   30557 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:00:48.958592   30557 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:48.958606   30557 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:00:48.958902   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:48.958936   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:48.974169   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0914 00:00:48.974727   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:48.975288   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:48.975315   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:48.975650   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:48.975888   30557 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:00:48.977660   30557 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:00:48.977676   30557 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:48.977981   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:48.978030   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:48.994306   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0914 00:00:48.994781   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:48.995231   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:48.995246   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:48.995569   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:48.995758   30557 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:00:48.998760   30557 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:48.999263   30557 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:48.999295   30557 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:48.999485   30557 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:00:48.999949   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:49.000000   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:49.015302   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
	I0914 00:00:49.015819   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:49.016311   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:49.016331   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:49.016668   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:49.016853   30557 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:00:49.017031   30557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:49.017048   30557 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:00:49.019627   30557 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:49.020041   30557 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:00:49.020066   30557 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:00:49.020201   30557 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:00:49.020367   30557 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:00:49.020529   30557 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:00:49.020692   30557 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	W0914 00:00:52.076061   30557 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.6:22: connect: no route to host
	W0914 00:00:52.076167   30557 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0914 00:00:52.076184   30557 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:52.076194   30557 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:00:52.076210   30557 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	I0914 00:00:52.076218   30557 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:00:52.076545   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:52.076596   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:52.092059   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0914 00:00:52.092713   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:52.093223   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:52.093243   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:52.093632   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:52.093843   30557 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:00:52.095734   30557 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:00:52.095755   30557 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:52.096253   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:52.096321   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:52.111270   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0914 00:00:52.111729   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:52.112230   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:52.112274   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:52.112591   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:52.112769   30557 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:00:52.116119   30557 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:52.116581   30557 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:52.116608   30557 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:52.116757   30557 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:00:52.117050   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:52.117084   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:52.132114   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0914 00:00:52.132600   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:52.133074   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:52.133093   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:52.133396   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:52.133579   30557 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:00:52.133729   30557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:52.133748   30557 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:00:52.136594   30557 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:52.136995   30557 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:00:52.137010   30557 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:00:52.137203   30557 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:00:52.137389   30557 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:00:52.137526   30557 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:00:52.137635   30557 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:00:52.223603   30557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:52.237505   30557 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:00:52.237535   30557 api_server.go:166] Checking apiserver status ...
	I0914 00:00:52.237564   30557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:00:52.252915   30557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:00:52.262326   30557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:00:52.262379   30557 ssh_runner.go:195] Run: ls
	I0914 00:00:52.266624   30557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:00:52.271343   30557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:00:52.271367   30557 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:00:52.271378   30557 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:00:52.271397   30557 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:00:52.271774   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:52.271828   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:52.287521   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0914 00:00:52.288063   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:52.288566   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:52.288589   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:52.288903   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:52.289082   30557 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:00:52.290804   30557 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:00:52.290822   30557 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:52.291210   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:52.291263   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:52.306106   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0914 00:00:52.306670   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:52.307145   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:52.307167   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:52.307629   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:52.307820   30557 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:00:52.310585   30557 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:52.311030   30557 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:52.311064   30557 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:52.311199   30557 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:00:52.311620   30557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:00:52.311665   30557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:00:52.326855   30557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0914 00:00:52.327352   30557 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:00:52.327812   30557 main.go:141] libmachine: Using API Version  1
	I0914 00:00:52.327835   30557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:00:52.328207   30557 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:00:52.328454   30557 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:00:52.328655   30557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:00:52.328677   30557 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:00:52.331537   30557 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:52.331992   30557 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:00:52.332012   30557 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:00:52.332201   30557 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:00:52.332380   30557 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:00:52.332524   30557 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:00:52.332630   30557 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:00:52.415057   30557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:00:52.429801   30557 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
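
Annotation: the status probe above checks each node the same way: it inspects disk usage on /var, asks systemd whether kubelet is active, and, for control-plane nodes, verifies the apiserver. That last check first looks for the kube-apiserver process and its freezer cgroup (the egrep on /proc/<pid>/cgroup exits 1 here, which is what happens when the guest uses the unified cgroup v2 hierarchy, where no named freezer controller is listed) and then falls back to an HTTPS GET on /healthz via the HA virtual IP. Below is a minimal standalone Go sketch of that final probe, using the https://192.168.39.254:8443 endpoint from the log and skipping certificate verification purely for illustration; it is not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// HA virtual IP and port taken from the log above; adjust for another cluster.
		const healthz = "https://192.168.39.254:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver certificate is issued by minikube's own CA, so this
			// sketch only inspects the HTTP status code, not the certificate chain.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(healthz)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()

		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz returned 200: ok")
		} else {
			fmt.Println("healthz returned", resp.StatusCode)
		}
	}

Against the running cluster this prints the same 200 response the log records; a connection error instead would indicate the control-plane endpoint is down.
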
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 7 (613.185369ms)

                                                
                                                
-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-817269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:01:03.790482   30708 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:01:03.790591   30708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:01:03.790599   30708 out.go:358] Setting ErrFile to fd 2...
	I0914 00:01:03.790603   30708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:01:03.790779   30708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:01:03.790933   30708 out.go:352] Setting JSON to false
	I0914 00:01:03.790960   30708 mustload.go:65] Loading cluster: ha-817269
	I0914 00:01:03.791091   30708 notify.go:220] Checking for updates...
	I0914 00:01:03.791362   30708 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:01:03.791376   30708 status.go:255] checking status of ha-817269 ...
	I0914 00:01:03.791855   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:03.791914   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:03.809890   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0914 00:01:03.810475   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:03.811127   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:03.811155   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:03.811507   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:03.811699   30708 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:01:03.813297   30708 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:01:03.813321   30708 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:01:03.813759   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:03.813812   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:03.828975   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0914 00:01:03.829484   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:03.829935   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:03.829961   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:03.830266   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:03.830451   30708 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:01:03.833093   30708 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:01:03.833557   30708 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:01:03.833583   30708 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:01:03.833733   30708 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:01:03.834060   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:03.834100   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:03.849161   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0914 00:01:03.849603   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:03.850074   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:03.850095   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:03.850442   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:03.850663   30708 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:01:03.850833   30708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:01:03.850861   30708 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:01:03.853809   30708 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:01:03.854255   30708 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:01:03.854280   30708 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:01:03.854512   30708 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:01:03.854719   30708 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:01:03.854889   30708 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:01:03.855019   30708 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:01:03.939194   30708 ssh_runner.go:195] Run: systemctl --version
	I0914 00:01:03.945120   30708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:01:03.959437   30708 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:01:03.959470   30708 api_server.go:166] Checking apiserver status ...
	I0914 00:01:03.959503   30708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:01:03.972838   30708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0914 00:01:03.982059   30708 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:01:03.982124   30708 ssh_runner.go:195] Run: ls
	I0914 00:01:03.986458   30708 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:01:03.990786   30708 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:01:03.990811   30708 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:01:03.990823   30708 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:01:03.990843   30708 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:01:03.991251   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:03.991299   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.007200   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 00:01:04.007705   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.008195   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.008217   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.008542   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.008720   30708 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:01:04.010336   30708 status.go:330] ha-817269-m02 host status = "Stopped" (err=<nil>)
	I0914 00:01:04.010351   30708 status.go:343] host is not running, skipping remaining checks
	I0914 00:01:04.010359   30708 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:01:04.010381   30708 status.go:255] checking status of ha-817269-m03 ...
	I0914 00:01:04.010675   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:04.010711   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.026629   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0914 00:01:04.027080   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.027585   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.027607   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.027969   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.028154   30708 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:01:04.029710   30708 status.go:330] ha-817269-m03 host status = "Running" (err=<nil>)
	I0914 00:01:04.029723   30708 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:01:04.030009   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:04.030054   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.044926   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0914 00:01:04.045419   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.045941   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.045960   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.046294   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.046475   30708 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0914 00:01:04.049210   30708 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:01:04.049635   30708 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:01:04.049654   30708 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:01:04.049799   30708 host.go:66] Checking if "ha-817269-m03" exists ...
	I0914 00:01:04.050103   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:04.050150   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.065540   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0914 00:01:04.066106   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.066629   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.066649   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.066985   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.067207   30708 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:01:04.067442   30708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:01:04.067462   30708 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:01:04.070533   30708 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:01:04.071093   30708 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:01:04.071125   30708 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:01:04.071294   30708 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:01:04.071516   30708 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:01:04.071676   30708 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:01:04.071839   30708 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:01:04.156014   30708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:01:04.172928   30708 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:01:04.172952   30708 api_server.go:166] Checking apiserver status ...
	I0914 00:01:04.172984   30708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:01:04.186549   30708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W0914 00:01:04.196960   30708 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:01:04.197021   30708 ssh_runner.go:195] Run: ls
	I0914 00:01:04.201807   30708 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:01:04.206063   30708 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:01:04.206085   30708 status.go:422] ha-817269-m03 apiserver status = Running (err=<nil>)
	I0914 00:01:04.206100   30708 status.go:257] ha-817269-m03 status: &{Name:ha-817269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:01:04.206115   30708 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:01:04.206400   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:04.206429   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.221473   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0914 00:01:04.221875   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.222487   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.222517   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.222821   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.223008   30708 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:01:04.224607   30708 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:01:04.224623   30708 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:01:04.224886   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:04.224921   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.240347   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0914 00:01:04.240753   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.241175   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.241192   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.241480   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.241683   30708 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:01:04.244407   30708 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:01:04.244782   30708 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:01:04.244809   30708 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:01:04.244915   30708 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:01:04.245218   30708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:04.245253   30708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:04.260218   30708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I0914 00:01:04.260680   30708 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:04.261106   30708 main.go:141] libmachine: Using API Version  1
	I0914 00:01:04.261127   30708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:04.261426   30708 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:04.261611   30708 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:01:04.261778   30708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:01:04.261802   30708 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:01:04.264165   30708 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:01:04.264560   30708 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:01:04.264594   30708 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:01:04.264706   30708 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:01:04.264865   30708 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:01:04.265006   30708 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:01:04.265113   30708 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:01:04.346473   30708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:01:04.361351   30708 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr" : exit status 7
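
Annotation: the exit status 7 traces back to ha-817269-m02, whose host never leaves "Stopped" after the `node start m02` call recorded in the audit table further down, so the plain status call reports a degraded cluster. A hypothetical polling helper is sketched below; it reuses the same `status --format={{.Host}}` and `-n` flags the post-mortem invokes and waits for the restarted node to report Running. It is an illustration only, not part of the test suite.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForHost polls `minikube status --format={{.Host}}` for a single node until
	// it reports the wanted host state or the deadline passes. The binary path,
	// profile, and node name mirror this log; all of them are illustrative assumptions.
	func waitForHost(profile, node, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// status exits non-zero while the node is down; only stdout matters here.
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
				"status", "--format={{.Host}}", "-n", node).Output()
			if strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("node %s did not report host state %q within %s", node, want, timeout)
	}

	func main() {
		if err := waitForHost("ha-817269", "ha-817269-m02", "Running", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In this run the node never reaches Running within the test's window, which is why the status call above exits non-zero.
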
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-817269 -n ha-817269
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-817269 logs -n 25: (1.325338408s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269:/home/docker/cp-test_ha-817269-m03_ha-817269.txt                      |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269 sudo cat                                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269.txt                                |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m04 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp testdata/cp-test.txt                                               | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269:/home/docker/cp-test_ha-817269-m04_ha-817269.txt                      |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269 sudo cat                                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269.txt                                |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03:/home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m03 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-817269 node stop m02 -v=7                                                    | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-817269 node start m02 -v=7                                                   | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:53:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:53:10.992229   25213 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:53:10.992351   25213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:53:10.992359   25213 out.go:358] Setting ErrFile to fd 2...
	I0913 23:53:10.992364   25213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:53:10.992582   25213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:53:10.993182   25213 out.go:352] Setting JSON to false
	I0913 23:53:10.994007   25213 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2137,"bootTime":1726269454,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:53:10.994114   25213 start.go:139] virtualization: kvm guest
	I0913 23:53:10.996352   25213 out.go:177] * [ha-817269] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:53:10.997878   25213 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:53:10.997885   25213 notify.go:220] Checking for updates...
	I0913 23:53:11.000664   25213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:53:11.001976   25213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:53:11.003286   25213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:11.004578   25213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:53:11.005770   25213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:53:11.007008   25213 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:53:11.043705   25213 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 23:53:11.045285   25213 start.go:297] selected driver: kvm2
	I0913 23:53:11.045307   25213 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:53:11.045322   25213 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:53:11.046039   25213 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:53:11.046135   25213 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:53:11.062537   25213 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:53:11.062601   25213 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:53:11.062838   25213 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:53:11.062868   25213 cni.go:84] Creating CNI manager for ""
	I0913 23:53:11.062912   25213 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0913 23:53:11.062918   25213 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 23:53:11.062975   25213 start.go:340] cluster config:
	{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:53:11.063101   25213 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:53:11.065303   25213 out.go:177] * Starting "ha-817269" primary control-plane node in "ha-817269" cluster
	I0913 23:53:11.066558   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:53:11.066607   25213 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:53:11.066629   25213 cache.go:56] Caching tarball of preloaded images
	I0913 23:53:11.066745   25213 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:53:11.066759   25213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:53:11.067057   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:11.067078   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json: {Name:mk941005e99ea2467f0024292cb50e3b0a4dc797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:11.067247   25213 start.go:360] acquireMachinesLock for ha-817269: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:53:11.067307   25213 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "ha-817269"
	I0913 23:53:11.067333   25213 start.go:93] Provisioning new machine with config: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:53:11.067393   25213 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 23:53:11.069056   25213 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 23:53:11.069210   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:11.069254   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:11.084868   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0913 23:53:11.085427   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:11.086011   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:11.086031   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:11.086441   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:11.086625   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:11.086765   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:11.086927   25213 start.go:159] libmachine.API.Create for "ha-817269" (driver="kvm2")
	I0913 23:53:11.086958   25213 client.go:168] LocalClient.Create starting
	I0913 23:53:11.086997   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:53:11.087038   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:11.087055   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:11.087115   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:53:11.087141   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:11.087157   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:11.087178   25213 main.go:141] libmachine: Running pre-create checks...
	I0913 23:53:11.087188   25213 main.go:141] libmachine: (ha-817269) Calling .PreCreateCheck
	I0913 23:53:11.087510   25213 main.go:141] libmachine: (ha-817269) Calling .GetConfigRaw
	I0913 23:53:11.088023   25213 main.go:141] libmachine: Creating machine...
	I0913 23:53:11.088037   25213 main.go:141] libmachine: (ha-817269) Calling .Create
	I0913 23:53:11.088224   25213 main.go:141] libmachine: (ha-817269) Creating KVM machine...
	I0913 23:53:11.089691   25213 main.go:141] libmachine: (ha-817269) DBG | found existing default KVM network
	I0913 23:53:11.090458   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.090231   25236 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0913 23:53:11.090491   25213 main.go:141] libmachine: (ha-817269) DBG | created network xml: 
	I0913 23:53:11.090503   25213 main.go:141] libmachine: (ha-817269) DBG | <network>
	I0913 23:53:11.090521   25213 main.go:141] libmachine: (ha-817269) DBG |   <name>mk-ha-817269</name>
	I0913 23:53:11.090538   25213 main.go:141] libmachine: (ha-817269) DBG |   <dns enable='no'/>
	I0913 23:53:11.090549   25213 main.go:141] libmachine: (ha-817269) DBG |   
	I0913 23:53:11.090556   25213 main.go:141] libmachine: (ha-817269) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 23:53:11.090559   25213 main.go:141] libmachine: (ha-817269) DBG |     <dhcp>
	I0913 23:53:11.090565   25213 main.go:141] libmachine: (ha-817269) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 23:53:11.090572   25213 main.go:141] libmachine: (ha-817269) DBG |     </dhcp>
	I0913 23:53:11.090581   25213 main.go:141] libmachine: (ha-817269) DBG |   </ip>
	I0913 23:53:11.090586   25213 main.go:141] libmachine: (ha-817269) DBG |   
	I0913 23:53:11.090593   25213 main.go:141] libmachine: (ha-817269) DBG | </network>
	I0913 23:53:11.090599   25213 main.go:141] libmachine: (ha-817269) DBG | 
	I0913 23:53:11.095940   25213 main.go:141] libmachine: (ha-817269) DBG | trying to create private KVM network mk-ha-817269 192.168.39.0/24...
	I0913 23:53:11.163359   25213 main.go:141] libmachine: (ha-817269) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269 ...
	I0913 23:53:11.163415   25213 main.go:141] libmachine: (ha-817269) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:53:11.163429   25213 main.go:141] libmachine: (ha-817269) DBG | private KVM network mk-ha-817269 192.168.39.0/24 created
	I0913 23:53:11.163449   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.163328   25236 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:11.163475   25213 main.go:141] libmachine: (ha-817269) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:53:11.414995   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.414842   25236 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa...
	I0913 23:53:11.595971   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.595821   25236 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/ha-817269.rawdisk...
	I0913 23:53:11.596001   25213 main.go:141] libmachine: (ha-817269) DBG | Writing magic tar header
	I0913 23:53:11.596011   25213 main.go:141] libmachine: (ha-817269) DBG | Writing SSH key tar header
	I0913 23:53:11.596018   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:11.595948   25236 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269 ...
	I0913 23:53:11.596100   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269
	I0913 23:53:11.596126   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269 (perms=drwx------)
	I0913 23:53:11.596136   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:53:11.596152   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:53:11.596167   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:53:11.596181   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:53:11.596198   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:53:11.596208   25213 main.go:141] libmachine: (ha-817269) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:53:11.596219   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:11.596234   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:53:11.596243   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:53:11.596254   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:53:11.596263   25213 main.go:141] libmachine: (ha-817269) DBG | Checking permissions on dir: /home
	I0913 23:53:11.596289   25213 main.go:141] libmachine: (ha-817269) DBG | Skipping /home - not owner
	I0913 23:53:11.596303   25213 main.go:141] libmachine: (ha-817269) Creating domain...
	I0913 23:53:11.597341   25213 main.go:141] libmachine: (ha-817269) define libvirt domain using xml: 
	I0913 23:53:11.597364   25213 main.go:141] libmachine: (ha-817269) <domain type='kvm'>
	I0913 23:53:11.597374   25213 main.go:141] libmachine: (ha-817269)   <name>ha-817269</name>
	I0913 23:53:11.597381   25213 main.go:141] libmachine: (ha-817269)   <memory unit='MiB'>2200</memory>
	I0913 23:53:11.597389   25213 main.go:141] libmachine: (ha-817269)   <vcpu>2</vcpu>
	I0913 23:53:11.597395   25213 main.go:141] libmachine: (ha-817269)   <features>
	I0913 23:53:11.597403   25213 main.go:141] libmachine: (ha-817269)     <acpi/>
	I0913 23:53:11.597409   25213 main.go:141] libmachine: (ha-817269)     <apic/>
	I0913 23:53:11.597415   25213 main.go:141] libmachine: (ha-817269)     <pae/>
	I0913 23:53:11.597429   25213 main.go:141] libmachine: (ha-817269)     
	I0913 23:53:11.597437   25213 main.go:141] libmachine: (ha-817269)   </features>
	I0913 23:53:11.597441   25213 main.go:141] libmachine: (ha-817269)   <cpu mode='host-passthrough'>
	I0913 23:53:11.597445   25213 main.go:141] libmachine: (ha-817269)   
	I0913 23:53:11.597451   25213 main.go:141] libmachine: (ha-817269)   </cpu>
	I0913 23:53:11.597481   25213 main.go:141] libmachine: (ha-817269)   <os>
	I0913 23:53:11.597515   25213 main.go:141] libmachine: (ha-817269)     <type>hvm</type>
	I0913 23:53:11.597528   25213 main.go:141] libmachine: (ha-817269)     <boot dev='cdrom'/>
	I0913 23:53:11.597539   25213 main.go:141] libmachine: (ha-817269)     <boot dev='hd'/>
	I0913 23:53:11.597573   25213 main.go:141] libmachine: (ha-817269)     <bootmenu enable='no'/>
	I0913 23:53:11.597590   25213 main.go:141] libmachine: (ha-817269)   </os>
	I0913 23:53:11.597596   25213 main.go:141] libmachine: (ha-817269)   <devices>
	I0913 23:53:11.597605   25213 main.go:141] libmachine: (ha-817269)     <disk type='file' device='cdrom'>
	I0913 23:53:11.597615   25213 main.go:141] libmachine: (ha-817269)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/boot2docker.iso'/>
	I0913 23:53:11.597622   25213 main.go:141] libmachine: (ha-817269)       <target dev='hdc' bus='scsi'/>
	I0913 23:53:11.597627   25213 main.go:141] libmachine: (ha-817269)       <readonly/>
	I0913 23:53:11.597634   25213 main.go:141] libmachine: (ha-817269)     </disk>
	I0913 23:53:11.597640   25213 main.go:141] libmachine: (ha-817269)     <disk type='file' device='disk'>
	I0913 23:53:11.597648   25213 main.go:141] libmachine: (ha-817269)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:53:11.597655   25213 main.go:141] libmachine: (ha-817269)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/ha-817269.rawdisk'/>
	I0913 23:53:11.597662   25213 main.go:141] libmachine: (ha-817269)       <target dev='hda' bus='virtio'/>
	I0913 23:53:11.597667   25213 main.go:141] libmachine: (ha-817269)     </disk>
	I0913 23:53:11.597673   25213 main.go:141] libmachine: (ha-817269)     <interface type='network'>
	I0913 23:53:11.597678   25213 main.go:141] libmachine: (ha-817269)       <source network='mk-ha-817269'/>
	I0913 23:53:11.597686   25213 main.go:141] libmachine: (ha-817269)       <model type='virtio'/>
	I0913 23:53:11.597695   25213 main.go:141] libmachine: (ha-817269)     </interface>
	I0913 23:53:11.597704   25213 main.go:141] libmachine: (ha-817269)     <interface type='network'>
	I0913 23:53:11.597712   25213 main.go:141] libmachine: (ha-817269)       <source network='default'/>
	I0913 23:53:11.597716   25213 main.go:141] libmachine: (ha-817269)       <model type='virtio'/>
	I0913 23:53:11.597722   25213 main.go:141] libmachine: (ha-817269)     </interface>
	I0913 23:53:11.597732   25213 main.go:141] libmachine: (ha-817269)     <serial type='pty'>
	I0913 23:53:11.597740   25213 main.go:141] libmachine: (ha-817269)       <target port='0'/>
	I0913 23:53:11.597744   25213 main.go:141] libmachine: (ha-817269)     </serial>
	I0913 23:53:11.597751   25213 main.go:141] libmachine: (ha-817269)     <console type='pty'>
	I0913 23:53:11.597755   25213 main.go:141] libmachine: (ha-817269)       <target type='serial' port='0'/>
	I0913 23:53:11.597763   25213 main.go:141] libmachine: (ha-817269)     </console>
	I0913 23:53:11.597769   25213 main.go:141] libmachine: (ha-817269)     <rng model='virtio'>
	I0913 23:53:11.597775   25213 main.go:141] libmachine: (ha-817269)       <backend model='random'>/dev/random</backend>
	I0913 23:53:11.597784   25213 main.go:141] libmachine: (ha-817269)     </rng>
	I0913 23:53:11.597791   25213 main.go:141] libmachine: (ha-817269)     
	I0913 23:53:11.597798   25213 main.go:141] libmachine: (ha-817269)     
	I0913 23:53:11.597805   25213 main.go:141] libmachine: (ha-817269)   </devices>
	I0913 23:53:11.597810   25213 main.go:141] libmachine: (ha-817269) </domain>
	I0913 23:53:11.597839   25213 main.go:141] libmachine: (ha-817269) 
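	The XML dump above is the complete libvirt domain definition the kvm2 driver hands to libvirtd before the "Creating domain..." step boots the VM. As a rough, illustrative sketch of that hand-off (not the driver's actual code), the following Go program defines and starts a domain from a pre-rendered XML file by shelling out to virsh; the XML path in main is a hypothetical placeholder, and virsh with access to qemu:///system is assumed to be installed:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// defineAndStart registers a domain XML file with libvirtd and boots it,
	// mirroring the "define libvirt domain using xml" / "Creating domain..."
	// sequence in the log above.
	func defineAndStart(xmlPath, domainName string) error {
		if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", domainName).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical path; the real driver renders the XML in memory instead.
		if err := defineAndStart("/tmp/ha-817269-domain.xml", "ha-817269"); err != nil {
			fmt.Println(err)
		}
	}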
	I0913 23:53:11.602075   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:d3:1c:ae in network default
	I0913 23:53:11.602702   25213 main.go:141] libmachine: (ha-817269) Ensuring networks are active...
	I0913 23:53:11.602745   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:11.603403   25213 main.go:141] libmachine: (ha-817269) Ensuring network default is active
	I0913 23:53:11.603718   25213 main.go:141] libmachine: (ha-817269) Ensuring network mk-ha-817269 is active
	I0913 23:53:11.604222   25213 main.go:141] libmachine: (ha-817269) Getting domain xml...
	I0913 23:53:11.604841   25213 main.go:141] libmachine: (ha-817269) Creating domain...
	I0913 23:53:12.819930   25213 main.go:141] libmachine: (ha-817269) Waiting to get IP...
	I0913 23:53:12.820703   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:12.821050   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:12.821108   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:12.821048   25236 retry.go:31] will retry after 252.038906ms: waiting for machine to come up
	I0913 23:53:13.074756   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:13.075365   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:13.075410   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:13.075309   25236 retry.go:31] will retry after 321.284859ms: waiting for machine to come up
	I0913 23:53:13.397726   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:13.398219   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:13.398243   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:13.398178   25236 retry.go:31] will retry after 348.399027ms: waiting for machine to come up
	I0913 23:53:13.747829   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:13.748247   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:13.748273   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:13.748201   25236 retry.go:31] will retry after 543.035066ms: waiting for machine to come up
	I0913 23:53:14.292901   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:14.293240   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:14.293266   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:14.293195   25236 retry.go:31] will retry after 627.458273ms: waiting for machine to come up
	I0913 23:53:14.922074   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:14.922439   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:14.922464   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:14.922402   25236 retry.go:31] will retry after 789.588185ms: waiting for machine to come up
	I0913 23:53:15.713440   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:15.713822   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:15.713870   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:15.713783   25236 retry.go:31] will retry after 845.063121ms: waiting for machine to come up
	I0913 23:53:16.560626   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:16.561178   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:16.561209   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:16.561142   25236 retry.go:31] will retry after 912.014634ms: waiting for machine to come up
	I0913 23:53:17.474565   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:17.475469   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:17.475500   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:17.475397   25236 retry.go:31] will retry after 1.824124091s: waiting for machine to come up
	I0913 23:53:19.301655   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:19.302297   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:19.302340   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:19.302247   25236 retry.go:31] will retry after 1.738487929s: waiting for machine to come up
	I0913 23:53:21.043153   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:21.043854   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:21.043884   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:21.043772   25236 retry.go:31] will retry after 2.838460047s: waiting for machine to come up
	I0913 23:53:23.885578   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:23.885976   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:23.886006   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:23.885922   25236 retry.go:31] will retry after 2.769913011s: waiting for machine to come up
	I0913 23:53:26.657329   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:26.657688   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find current IP address of domain ha-817269 in network mk-ha-817269
	I0913 23:53:26.657713   25213 main.go:141] libmachine: (ha-817269) DBG | I0913 23:53:26.657642   25236 retry.go:31] will retry after 4.533163335s: waiting for machine to come up
	I0913 23:53:31.192391   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:31.192864   25213 main.go:141] libmachine: (ha-817269) Found IP for machine: 192.168.39.132
	I0913 23:53:31.192885   25213 main.go:141] libmachine: (ha-817269) Reserving static IP address...
	I0913 23:53:31.192892   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has current primary IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:31.193278   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find host DHCP lease matching {name: "ha-817269", mac: "52:54:00:ff:63:b0", ip: "192.168.39.132"} in network mk-ha-817269
	I0913 23:53:31.264589   25213 main.go:141] libmachine: (ha-817269) Reserved static IP address: 192.168.39.132
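	The repeated "unable to find current IP address ... will retry after ..." lines above are the driver polling the mk-ha-817269 network for a DHCP lease matching the domain's MAC address, sleeping a little longer between attempts until an address appears (here 192.168.39.132 after roughly 20 seconds), at which point the lease is pinned as a static reservation. A minimal sketch of that wait loop, assuming a lookupIP callback standing in for the actual lease query and a hypothetical package name:

	package vmwait

	import (
		"fmt"
		"log"
		"time"
	)

	// WaitForIP polls lookupIP until it returns a non-empty address or the
	// timeout elapses, growing the delay between attempts much like the
	// retry.go lines in the log above.
	func WaitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			log.Printf("will retry after %v: waiting for machine to come up", delay)
			time.Sleep(delay)
			delay *= 2
			if delay > 5*time.Second {
				delay = 5 * time.Second
			}
		}
		return "", fmt.Errorf("timed out waiting for an IP after %v", timeout)
	}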
	I0913 23:53:31.264621   25213 main.go:141] libmachine: (ha-817269) DBG | Getting to WaitForSSH function...
	I0913 23:53:31.264628   25213 main.go:141] libmachine: (ha-817269) Waiting for SSH to be available...
	I0913 23:53:31.267119   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:31.267713   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269
	I0913 23:53:31.267739   25213 main.go:141] libmachine: (ha-817269) DBG | unable to find defined IP address of network mk-ha-817269 interface with MAC address 52:54:00:ff:63:b0
	I0913 23:53:31.268014   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH client type: external
	I0913 23:53:31.268039   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa (-rw-------)
	I0913 23:53:31.268086   25213 main.go:141] libmachine: (ha-817269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:53:31.268107   25213 main.go:141] libmachine: (ha-817269) DBG | About to run SSH command:
	I0913 23:53:31.268117   25213 main.go:141] libmachine: (ha-817269) DBG | exit 0
	I0913 23:53:31.271648   25213 main.go:141] libmachine: (ha-817269) DBG | SSH cmd err, output: exit status 255: 
	I0913 23:53:31.271671   25213 main.go:141] libmachine: (ha-817269) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0913 23:53:31.271680   25213 main.go:141] libmachine: (ha-817269) DBG | command : exit 0
	I0913 23:53:31.271686   25213 main.go:141] libmachine: (ha-817269) DBG | err     : exit status 255
	I0913 23:53:31.271707   25213 main.go:141] libmachine: (ha-817269) DBG | output  : 
	I0913 23:53:34.273846   25213 main.go:141] libmachine: (ha-817269) DBG | Getting to WaitForSSH function...
	I0913 23:53:34.276075   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.276457   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.276487   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.276586   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH client type: external
	I0913 23:53:34.276614   25213 main.go:141] libmachine: (ha-817269) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa (-rw-------)
	I0913 23:53:34.276662   25213 main.go:141] libmachine: (ha-817269) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:53:34.276679   25213 main.go:141] libmachine: (ha-817269) DBG | About to run SSH command:
	I0913 23:53:34.276690   25213 main.go:141] libmachine: (ha-817269) DBG | exit 0
	I0913 23:53:34.403956   25213 main.go:141] libmachine: (ha-817269) DBG | SSH cmd err, output: <nil>: 
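	WaitForSSH above probes the guest by running `exit 0` through the system ssh client with host-key checking disabled and key-only authentication; the first attempt fails with status 255 because sshd is not up yet, and the retry three seconds later succeeds once the lease is visible. A small, illustrative probe in the same spirit (the address and key path are the ones from this log; the retry count is arbitrary):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest with options similar to those logged
	// above; a nil error means sshd is accepting connections.
	func sshReady(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa"
		for attempt := 0; attempt < 20; attempt++ {
			if err := sshReady("192.168.39.132", key); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}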
	I0913 23:53:34.404198   25213 main.go:141] libmachine: (ha-817269) KVM machine creation complete!
	I0913 23:53:34.404539   25213 main.go:141] libmachine: (ha-817269) Calling .GetConfigRaw
	I0913 23:53:34.405266   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:34.405464   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:34.405588   25213 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:53:34.405602   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:34.406773   25213 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:53:34.406791   25213 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:53:34.406807   25213 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:53:34.406818   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.408795   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.409115   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.409151   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.409322   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.409481   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.409603   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.409716   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.409897   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.410072   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.410087   25213 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:53:34.519177   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:53:34.519197   25213 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:53:34.519204   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.521830   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.522248   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.522280   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.522421   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.522611   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.522799   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.522891   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.523011   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.523208   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.523226   25213 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:53:34.632549   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:53:34.632637   25213 main.go:141] libmachine: found compatible host: buildroot
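	Provisioner detection above works by cat-ing /etc/os-release over SSH and matching the ID field (buildroot here) against the known provisioners. A sketch of that match, assuming the raw file contents have already been fetched and using a hypothetical package name:

	package provision

	import (
		"fmt"
		"strings"
	)

	// DetectOS returns the value of the ID= field from /etc/os-release contents,
	// which is what selects the buildroot provisioner in the log above.
	func DetectOS(osRelease string) (string, error) {
		for _, line := range strings.Split(osRelease, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", fmt.Errorf("no ID= field found in /etc/os-release")
	}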
	I0913 23:53:34.632644   25213 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:53:34.632652   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:34.632870   25213 buildroot.go:166] provisioning hostname "ha-817269"
	I0913 23:53:34.632894   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:34.633088   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.635824   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.636183   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.636209   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.636399   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.636546   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.636680   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.636783   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.636900   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.637092   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.637105   25213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269 && echo "ha-817269" | sudo tee /etc/hostname
	I0913 23:53:34.758071   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269
	
	I0913 23:53:34.758099   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.761001   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.761542   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.761573   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.761733   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.761956   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.762123   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.762254   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.762386   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:34.762570   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:34.762586   25213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:53:34.882167   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:53:34.882200   25213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:53:34.882236   25213 buildroot.go:174] setting up certificates
	I0913 23:53:34.882252   25213 provision.go:84] configureAuth start
	I0913 23:53:34.882263   25213 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0913 23:53:34.882558   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:34.885983   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.886447   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.886476   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.886647   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.889068   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.889616   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.889647   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.889744   25213 provision.go:143] copyHostCerts
	I0913 23:53:34.889790   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:53:34.889826   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0913 23:53:34.889833   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:53:34.889909   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:53:34.889993   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:53:34.890014   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0913 23:53:34.890021   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:53:34.890050   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:53:34.890089   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:53:34.890105   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0913 23:53:34.890111   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:53:34.890135   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:53:34.890178   25213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269 san=[127.0.0.1 192.168.39.132 ha-817269 localhost minikube]
	I0913 23:53:34.960122   25213 provision.go:177] copyRemoteCerts
	I0913 23:53:34.960189   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:53:34.960211   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:34.963549   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.964287   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:34.964320   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:34.964474   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:34.964703   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:34.964867   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:34.965025   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.050215   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 23:53:35.050281   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:53:35.076921   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 23:53:35.077000   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0913 23:53:35.102757   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 23:53:35.102863   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:53:35.127477   25213 provision.go:87] duration metric: took 245.211667ms to configureAuth
	I0913 23:53:35.127513   25213 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:53:35.127714   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:53:35.127813   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.130425   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.130728   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.130749   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.131038   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.131252   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.131422   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.131547   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.131689   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:35.131908   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:35.131926   25213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:53:35.358185   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:53:35.358236   25213 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:53:35.358245   25213 main.go:141] libmachine: (ha-817269) Calling .GetURL
	I0913 23:53:35.359953   25213 main.go:141] libmachine: (ha-817269) DBG | Using libvirt version 6000000
	I0913 23:53:35.362538   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.362813   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.362837   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.363009   25213 main.go:141] libmachine: Docker is up and running!
	I0913 23:53:35.363061   25213 main.go:141] libmachine: Reticulating splines...
	I0913 23:53:35.363074   25213 client.go:171] duration metric: took 24.276108937s to LocalClient.Create
	I0913 23:53:35.363096   25213 start.go:167] duration metric: took 24.276170063s to libmachine.API.Create "ha-817269"
	I0913 23:53:35.363107   25213 start.go:293] postStartSetup for "ha-817269" (driver="kvm2")
	I0913 23:53:35.363122   25213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:53:35.363145   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.363425   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:53:35.363461   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.366068   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.366439   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.366467   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.366579   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.366792   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.366925   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.367069   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.454158   25213 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:53:35.458934   25213 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:53:35.458963   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:53:35.459029   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:53:35.459121   25213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0913 23:53:35.459134   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0913 23:53:35.459254   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 23:53:35.469014   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:53:35.493957   25213 start.go:296] duration metric: took 130.832596ms for postStartSetup
	I0913 23:53:35.494005   25213 main.go:141] libmachine: (ha-817269) Calling .GetConfigRaw
	I0913 23:53:35.494587   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:35.497099   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.497422   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.497460   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.497776   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:35.498033   25213 start.go:128] duration metric: took 24.430628809s to createHost
	I0913 23:53:35.498060   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.500703   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.501122   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.501174   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.501414   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.501616   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.501837   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.501983   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.502126   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:53:35.502312   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0913 23:53:35.502323   25213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:53:35.612315   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726271615.591239485
	
	I0913 23:53:35.612338   25213 fix.go:216] guest clock: 1726271615.591239485
	I0913 23:53:35.612345   25213 fix.go:229] Guest: 2024-09-13 23:53:35.591239485 +0000 UTC Remote: 2024-09-13 23:53:35.498047714 +0000 UTC m=+24.541264704 (delta=93.191771ms)
	I0913 23:53:35.612379   25213 fix.go:200] guest clock delta is within tolerance: 93.191771ms
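	The fix.go lines compare the guest clock, read by running `date +%s.%N` over SSH, with the host clock at roughly the same instant; here the ~93 ms delta is accepted as within tolerance, so no adjustment is made. A small sketch of that comparison, with runSSH standing in for the SSH runner (the float parse keeps only about microsecond precision, which is enough for this check):

	package provision

	import (
		"strconv"
		"strings"
		"time"
	)

	// ClockDelta runs `date +%s.%N` on the guest and returns how far the guest
	// clock is behind (positive) or ahead of (negative) the local clock.
	func ClockDelta(runSSH func(cmd string) (string, error)) (time.Duration, error) {
		out, err := runSSH("date +%s.%N")
		if err != nil {
			return 0, err
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}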
	I0913 23:53:35.612393   25213 start.go:83] releasing machines lock for "ha-817269", held for 24.545066092s
	I0913 23:53:35.612414   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.612654   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:35.614972   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.615244   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.615274   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.615432   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.615990   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.616142   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:35.616256   25213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:53:35.616308   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.616349   25213 ssh_runner.go:195] Run: cat /version.json
	I0913 23:53:35.616368   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:35.618751   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.618958   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.619096   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.619121   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.619299   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.619370   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:35.619398   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:35.619603   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:35.619615   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.619809   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:35.619822   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.620039   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.620061   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:35.620201   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:35.732907   25213 ssh_runner.go:195] Run: systemctl --version
	I0913 23:53:35.738704   25213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:53:35.911858   25213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:53:35.917837   25213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:53:35.917904   25213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:53:35.933787   25213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:53:35.933817   25213 start.go:495] detecting cgroup driver to use...
	I0913 23:53:35.933876   25213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:53:35.948182   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:53:35.963525   25213 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:53:35.963578   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:53:35.976683   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:53:35.990088   25213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:53:36.107297   25213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:53:36.255450   25213 docker.go:233] disabling docker service ...
	I0913 23:53:36.255511   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:53:36.272033   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:53:36.285254   25213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:53:36.401144   25213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:53:36.513483   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:53:36.527278   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:53:36.545449   25213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:53:36.545504   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.556091   25213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:53:36.556150   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.566368   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.576307   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.586278   25213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:53:36.596436   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.606372   25213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.622740   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:53:36.632542   25213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:53:36.641527   25213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:53:36.641603   25213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:53:36.654880   25213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:53:36.663948   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:53:36.776355   25213 ssh_runner.go:195] Run: sudo systemctl restart crio
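	The run of tee/sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs as cgroup manager, conmon_cgroup, the unprivileged-port sysctl), writes /etc/crictl.yaml, loads br_netfilter, enables IP forwarding, and finally restarts cri-o. A compact sketch that replays the core of that sequence through a stand-in SSH runner (the commands are the ones logged above; the helper and package names are illustrative):

	package provision

	import "fmt"

	// ConfigureCRIO applies the same in-place config edits the log shows and
	// restarts the runtime, stopping at the first command that fails.
	func ConfigureCRIO(runSSH func(cmd string) error) error {
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			if err := runSSH(c); err != nil {
				return fmt.Errorf("%q failed: %w", c, err)
			}
		}
		return nil
	}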
	I0913 23:53:36.864463   25213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:53:36.864547   25213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:53:36.868817   25213 start.go:563] Will wait 60s for crictl version
	I0913 23:53:36.868871   25213 ssh_runner.go:195] Run: which crictl
	I0913 23:53:36.872311   25213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:53:36.914551   25213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:53:36.914633   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:53:36.941104   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:53:36.971114   25213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:53:36.972363   25213 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0913 23:53:36.974989   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:36.975289   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:36.975355   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:36.975572   25213 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:53:36.979264   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:53:36.991147   25213 kubeadm.go:883] updating cluster {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0913 23:53:36.991246   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:53:36.991285   25213 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:53:37.026797   25213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0913 23:53:37.026870   25213 ssh_runner.go:195] Run: which lz4
	I0913 23:53:37.030818   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0913 23:53:37.030925   25213 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 23:53:37.034775   25213 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 23:53:37.034802   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0913 23:53:38.266803   25213 crio.go:462] duration metric: took 1.235912846s to copy over tarball
	I0913 23:53:38.266884   25213 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 23:53:40.230553   25213 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.963636138s)
	I0913 23:53:40.230586   25213 crio.go:469] duration metric: took 1.963756576s to extract the tarball
	I0913 23:53:40.230593   25213 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 23:53:40.265815   25213 ssh_runner.go:195] Run: sudo crictl images --output json
	I0913 23:53:40.306468   25213 crio.go:514] all images are preloaded for cri-o runtime.
	I0913 23:53:40.306488   25213 cache_images.go:84] Images are preloaded, skipping loading
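Editor's note: the sequence above is minikube's preload path: crictl finds no cached images, so the host-side tarball is copied to /preloaded.tar.lz4, unpacked into /var, removed, and the image check is repeated. A minimal sketch of the same steps driven by hand over SSH (profile name and paths taken from the log; assumes the tarball has already been copied onto the node):

    # Check which images the CRI already knows about
    minikube ssh -p ha-817269 -- sudo crictl images --output json
    # Verify the tarball landed, then unpack the cached image layers under /var
    minikube ssh -p ha-817269 -- stat -c "%s %y" /preloaded.tar.lz4
    minikube ssh -p ha-817269 -- sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    minikube ssh -p ha-817269 -- sudo rm /preloaded.tar.lz4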
	I0913 23:53:40.306495   25213 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.31.1 crio true true} ...
	I0913 23:53:40.306599   25213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
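Editor's note: the drop-in above relies on the standard systemd idiom of an empty ExecStart= line to clear the packaged command before redefining it with minikube's kubelet flags. Two stock systemd commands confirm what the kubelet will actually run on the node (sketch only; profile name from the log):

    # Print the unit together with every drop-in that overrides it (including 10-kubeadm.conf)
    minikube ssh -p ha-817269 -- systemctl cat kubelet
    # Show the effective ExecStart after the override is applied
    minikube ssh -p ha-817269 -- systemctl show kubelet -p ExecStart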
	I0913 23:53:40.306662   25213 ssh_runner.go:195] Run: crio config
	I0913 23:53:40.351105   25213 cni.go:84] Creating CNI manager for ""
	I0913 23:53:40.351125   25213 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 23:53:40.351134   25213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:53:40.351153   25213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-817269 NodeName:ha-817269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:53:40.351279   25213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-817269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:53:40.351300   25213 kube-vip.go:115] generating kube-vip config ...
	I0913 23:53:40.351344   25213 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 23:53:40.366350   25213 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 23:53:40.366447   25213 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
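Editor's note: kube-vip runs as a static pod on every control-plane node and advertises the HA virtual IP (192.168.39.254) over ARP on eth0, with a lease-based leader election deciding which node answers; lb_enable additionally load-balances API server traffic on port 8443. Two quick checks once the node is up (standard commands; the mirror-pod name follows the usual <pod>-<node> convention and is an assumption):

    # The current leader should hold the VIP as a secondary address on eth0
    minikube ssh -p ha-817269 -- ip addr show eth0 | grep 192.168.39.254
    # Inspect kube-vip's election and ARP activity
    kubectl -n kube-system logs kube-vip-ha-817269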
	I0913 23:53:40.366496   25213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:53:40.375568   25213 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:53:40.375631   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0913 23:53:40.384270   25213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0913 23:53:40.399072   25213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:53:40.414108   25213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 23:53:40.428894   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0913 23:53:40.444102   25213 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 23:53:40.447494   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
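Editor's note: the one-liner above is the idempotent /etc/hosts update: the grep on the previous line looks for an existing entry, and when none is found the file is rewritten with any stale control-plane.minikube.internal line dropped and the current VIP mapping appended. The same logic, spread over lines for readability (not a new command):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # keep everything except a stale entry
      echo "192.168.39.254	control-plane.minikube.internal"     # append the current VIP mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts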
	I0913 23:53:40.459630   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:53:40.592810   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:53:40.608621   25213 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.132
	I0913 23:53:40.608648   25213 certs.go:194] generating shared ca certs ...
	I0913 23:53:40.608664   25213 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:40.608849   25213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:53:40.608898   25213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:53:40.608914   25213 certs.go:256] generating profile certs ...
	I0913 23:53:40.608974   25213 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0913 23:53:40.608993   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt with IP's: []
	I0913 23:53:41.075182   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt ...
	I0913 23:53:41.075218   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt: {Name:mk37663b0bb79f3cd029e72ea8174a7a1a581895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.075407   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key ...
	I0913 23:53:41.075421   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key: {Name:mk3478584ca6bdcaa18e4b2b10357b0ee027b48f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.075503   25213 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b
	I0913 23:53:41.075523   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.254]
	I0913 23:53:41.146490   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b ...
	I0913 23:53:41.146522   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b: {Name:mkfe3a73348ddd87edbc5a6cabc554c4610640b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.146692   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b ...
	I0913 23:53:41.146706   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b: {Name:mk1cbe766e5f2a877a631cfb2d64d99e621e4f87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.146783   25213 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.3517de7b -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0913 23:53:41.146895   25213 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.3517de7b -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0913 23:53:41.146959   25213 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
	I0913 23:53:41.146977   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt with IP's: []
	I0913 23:53:41.216992   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt ...
	I0913 23:53:41.217023   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt: {Name:mk0de44fc0ae0c22325d0da288904b6579d9cf32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:41.217185   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key ...
	I0913 23:53:41.217197   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key: {Name:mkc555d16147bb2f803744ff0236a4697e3c2ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
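Editor's note: the apiserver profile certificate generated above has to carry every address a client might dial, including the HA VIP 192.168.39.254 and the service-network address 10.96.0.1. One way to confirm the SANs made it into the cert (path from the log; plain openssl):

    # List the DNS/IP SANs baked into the generated apiserver certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt \
      | grep -A1 'Subject Alternative Name'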
	I0913 23:53:41.217278   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 23:53:41.217298   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 23:53:41.217314   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:53:41.217331   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:53:41.217346   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 23:53:41.217361   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 23:53:41.217379   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 23:53:41.217393   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 23:53:41.217446   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0913 23:53:41.217487   25213 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0913 23:53:41.217504   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:53:41.217533   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:53:41.217566   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:53:41.217591   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:53:41.217636   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:53:41.217677   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.217694   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.217708   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.218292   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:53:41.242148   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:53:41.263906   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:53:41.285294   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:53:41.307736   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0913 23:53:41.329593   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 23:53:41.354179   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:53:41.379968   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:53:41.406492   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0913 23:53:41.428642   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0913 23:53:41.450730   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:53:41.474234   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:53:41.489695   25213 ssh_runner.go:195] Run: openssl version
	I0913 23:53:41.495118   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:53:41.505319   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.509425   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.509471   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:53:41.514802   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:53:41.524637   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0913 23:53:41.534301   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.538253   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.538295   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0913 23:53:41.543478   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0913 23:53:41.553327   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0913 23:53:41.562956   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.567018   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.567074   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0913 23:53:41.572317   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
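Editor's note: each CA copied into /usr/share/ca-certificates is then exposed to OpenSSL-based clients by linking it into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above); the hash is what openssl uses to locate the issuer during verification. A sketch of the same hash-and-link step for one certificate (paths from the log):

    # Compute the subject hash and create the <hash>.0 lookup symlink
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"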
	I0913 23:53:41.582216   25213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:53:41.585940   25213 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:53:41.585997   25213 kubeadm.go:392] StartCluster: {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:53:41.586064   25213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0913 23:53:41.586124   25213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0913 23:53:41.621256   25213 cri.go:89] found id: ""
	I0913 23:53:41.621326   25213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:53:41.630554   25213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:53:41.639193   25213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:53:41.647749   25213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:53:41.647766   25213 kubeadm.go:157] found existing configuration files:
	
	I0913 23:53:41.647812   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:53:41.656046   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:53:41.656099   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:53:41.664760   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:53:41.673925   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:53:41.673986   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:53:41.683704   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:53:41.693080   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:53:41.693153   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:53:41.702617   25213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:53:41.711400   25213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:53:41.711451   25213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:53:41.721031   25213 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 23:53:41.828442   25213 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:53:41.828728   25213 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:53:41.938049   25213 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:53:41.938168   25213 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:53:41.938296   25213 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:53:41.947216   25213 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:53:41.989200   25213 out.go:235]   - Generating certificates and keys ...
	I0913 23:53:41.989307   25213 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:53:41.989402   25213 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:53:42.288660   25213 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:53:42.544220   25213 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:53:42.813284   25213 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:53:42.949393   25213 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:53:43.132818   25213 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:53:43.133008   25213 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-817269 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
	I0913 23:53:43.259724   25213 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:53:43.259961   25213 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-817269 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
	I0913 23:53:43.610264   25213 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:53:43.726166   25213 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:53:43.940296   25213 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:53:43.940368   25213 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:53:44.076855   25213 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:53:44.294961   25213 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:53:44.360663   25213 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:53:44.488776   25213 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:53:44.595267   25213 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:53:44.595948   25213 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:53:44.599411   25213 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:53:44.640864   25213 out.go:235]   - Booting up control plane ...
	I0913 23:53:44.641031   25213 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:53:44.641134   25213 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:53:44.641222   25213 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:53:44.641384   25213 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:53:44.641507   25213 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:53:44.641592   25213 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:53:44.757032   25213 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:53:44.757204   25213 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:53:45.759248   25213 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002499871s
	I0913 23:53:45.759374   25213 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:53:51.432813   25213 kubeadm.go:310] [api-check] The API server is healthy after 5.676702105s
	I0913 23:53:51.444631   25213 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:53:51.464639   25213 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:53:51.991895   25213 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:53:51.992115   25213 kubeadm.go:310] [mark-control-plane] Marking the node ha-817269 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:53:52.005034   25213 kubeadm.go:310] [bootstrap-token] Using token: cl4itr.u5psq9zksjfm5ip6
	I0913 23:53:52.006623   25213 out.go:235]   - Configuring RBAC rules ...
	I0913 23:53:52.006754   25213 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:53:52.017234   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:53:52.025927   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:53:52.029244   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:53:52.037816   25213 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:53:52.041421   25213 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:53:52.057813   25213 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:53:52.295747   25213 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:53:52.839601   25213 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:53:52.840653   25213 kubeadm.go:310] 
	I0913 23:53:52.840747   25213 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:53:52.840758   25213 kubeadm.go:310] 
	I0913 23:53:52.840869   25213 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:53:52.840883   25213 kubeadm.go:310] 
	I0913 23:53:52.840919   25213 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:53:52.841006   25213 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:53:52.841061   25213 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:53:52.841069   25213 kubeadm.go:310] 
	I0913 23:53:52.841115   25213 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:53:52.841123   25213 kubeadm.go:310] 
	I0913 23:53:52.841160   25213 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:53:52.841167   25213 kubeadm.go:310] 
	I0913 23:53:52.841213   25213 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:53:52.841290   25213 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:53:52.841354   25213 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:53:52.841361   25213 kubeadm.go:310] 
	I0913 23:53:52.841430   25213 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:53:52.841502   25213 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:53:52.841509   25213 kubeadm.go:310] 
	I0913 23:53:52.841598   25213 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cl4itr.u5psq9zksjfm5ip6 \
	I0913 23:53:52.841747   25213 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0913 23:53:52.841785   25213 kubeadm.go:310] 	--control-plane 
	I0913 23:53:52.841791   25213 kubeadm.go:310] 
	I0913 23:53:52.841935   25213 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:53:52.841949   25213 kubeadm.go:310] 
	I0913 23:53:52.842025   25213 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cl4itr.u5psq9zksjfm5ip6 \
	I0913 23:53:52.842167   25213 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
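Editor's note: the --discovery-token-ca-cert-hash printed in the join commands is a SHA-256 over the DER-encoded public key of the cluster CA, so it can be recomputed on the control plane whenever the original output is lost (standard kubeadm recipe; minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than the default /etc/kubernetes/pki path):

    # Recompute the discovery hash from the cluster CA public key
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # A fresh token plus the full join command can also be printed with:
    #   kubeadm token create --print-join-command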
	I0913 23:53:52.843211   25213 kubeadm.go:310] W0913 23:53:41.808739     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:53:52.843530   25213 kubeadm.go:310] W0913 23:53:41.810468     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:53:52.843689   25213 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
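Editor's note: the two deprecation warnings come from the generated config still using kubeadm.k8s.io/v1beta3; kubeadm 1.31 accepts it but recommends migrating to the current API (v1beta4). The migration it suggests is a one-shot conversion of the file minikube wrote to /var/tmp/minikube/kubeadm.yaml (optional here, since minikube regenerates the config on every start; the output filename below is arbitrary):

    # Convert the deprecated v1beta3 config to the current kubeadm API version
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml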
	I0913 23:53:52.843710   25213 cni.go:84] Creating CNI manager for ""
	I0913 23:53:52.843717   25213 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 23:53:52.845718   25213 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0913 23:53:52.847496   25213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0913 23:53:52.852667   25213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0913 23:53:52.852690   25213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0913 23:53:52.872920   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
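Editor's note: with more than one node requested, minikube picks kindnet as the CNI and applies its manifest through the cached kubectl binary. Once the apply goes through, the daemonset should schedule one pod per node (sketch; the app=kindnet label follows kindnet's usual manifest and is an assumption here):

    # kindnet runs as a daemonset in kube-system, one pod per node
    kubectl -n kube-system get pods -l app=kindnet -o wide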
	I0913 23:53:53.256393   25213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:53:53.256456   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:53.256509   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-817269 minikube.k8s.io/updated_at=2024_09_13T23_53_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=ha-817269 minikube.k8s.io/primary=true
	I0913 23:53:53.410594   25213 ops.go:34] apiserver oom_adj: -16
	I0913 23:53:53.410722   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:53.910931   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:54.411234   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:54.910919   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:55.410801   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:55.911353   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:56.410987   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:53:56.504995   25213 kubeadm.go:1113] duration metric: took 3.248599134s to wait for elevateKubeSystemPrivileges
	I0913 23:53:56.505051   25213 kubeadm.go:394] duration metric: took 14.919056274s to StartCluster
	I0913 23:53:56.505070   25213 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:56.505153   25213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:53:56.505950   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:53:56.506209   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:53:56.506204   25213 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:53:56.506233   25213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 23:53:56.506302   25213 start.go:241] waiting for startup goroutines ...
	I0913 23:53:56.506314   25213 addons.go:69] Setting storage-provisioner=true in profile "ha-817269"
	I0913 23:53:56.506316   25213 addons.go:69] Setting default-storageclass=true in profile "ha-817269"
	I0913 23:53:56.506330   25213 addons.go:234] Setting addon storage-provisioner=true in "ha-817269"
	I0913 23:53:56.506331   25213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-817269"
	I0913 23:53:56.506356   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:53:56.506436   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:53:56.506766   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.506779   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.506811   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.506811   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.521882   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0913 23:53:56.521996   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0913 23:53:56.522334   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.522410   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.522846   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.522873   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.522872   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.522927   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.523287   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.523330   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.523511   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:56.523844   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.523886   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.525584   25213 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:53:56.525917   25213 kapi.go:59] client config for ha-817269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 23:53:56.526462   25213 cert_rotation.go:140] Starting client certificate rotation controller
	I0913 23:53:56.526745   25213 addons.go:234] Setting addon default-storageclass=true in "ha-817269"
	I0913 23:53:56.526790   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:53:56.527168   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.527216   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.539201   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0913 23:53:56.539644   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.540159   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.540183   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.540568   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.540791   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:56.541966   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0913 23:53:56.542386   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.542507   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:56.542857   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.542879   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.543221   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.543822   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:56.543869   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:56.544554   25213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:53:56.545739   25213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:53:56.545769   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:53:56.545792   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:56.549150   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.549676   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:56.549698   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.549859   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:56.550033   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:56.550171   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:56.550307   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:56.558876   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45083
	I0913 23:53:56.559374   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:56.559864   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:56.559884   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:56.560214   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:56.560418   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:53:56.562125   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:53:56.562374   25213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:53:56.562393   25213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:53:56.562410   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:53:56.565212   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.565614   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:53:56.565643   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:53:56.565871   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:53:56.566052   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:53:56.566201   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:53:56.566353   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:53:56.704825   25213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:53:56.707339   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
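Editor's note: the sed pipeline above patches the CoreDNS Corefile inside the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway 192.168.39.1 from within the cluster: a hosts plugin stanza is inserted ahead of the forward directive and query logging is switched on. The result can be inspected with plain kubectl; the expected stanza is shown as a comment:

    # The patched Corefile should now contain the hosts block below
    kubectl -n kube-system get configmap coredns -o yaml
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }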
	I0913 23:53:56.717097   25213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:53:57.609799   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.609820   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.609880   25213 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 23:53:57.609900   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.609909   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.610103   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610121   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610131   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.610138   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.610212   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610226   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610239   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.610246   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.610406   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610425   25213 main.go:141] libmachine: (ha-817269) DBG | Closing plugin on server side
	I0913 23:53:57.610426   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610524   25213 main.go:141] libmachine: (ha-817269) DBG | Closing plugin on server side
	I0913 23:53:57.610558   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.610580   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.610667   25213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 23:53:57.610686   25213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 23:53:57.610764   25213 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0913 23:53:57.610769   25213 round_trippers.go:469] Request Headers:
	I0913 23:53:57.610777   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:53:57.610781   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:53:57.629053   25213 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0913 23:53:57.629634   25213 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0913 23:53:57.629652   25213 round_trippers.go:469] Request Headers:
	I0913 23:53:57.629662   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:53:57.629668   25213 round_trippers.go:473]     Content-Type: application/json
	I0913 23:53:57.629671   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:53:57.633924   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:53:57.634075   25213 main.go:141] libmachine: Making call to close driver server
	I0913 23:53:57.634086   25213 main.go:141] libmachine: (ha-817269) Calling .Close
	I0913 23:53:57.634387   25213 main.go:141] libmachine: Successfully made call to close driver server
	I0913 23:53:57.634405   25213 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 23:53:57.637066   25213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0913 23:53:57.638474   25213 addons.go:510] duration metric: took 1.132245825s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0913 23:53:57.638515   25213 start.go:246] waiting for cluster config update ...
	I0913 23:53:57.638531   25213 start.go:255] writing updated cluster config ...
	I0913 23:53:57.640741   25213 out.go:201] 
	I0913 23:53:57.642452   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:53:57.642531   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:57.644339   25213 out.go:177] * Starting "ha-817269-m02" control-plane node in "ha-817269" cluster
	I0913 23:53:57.645864   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:53:57.645892   25213 cache.go:56] Caching tarball of preloaded images
	I0913 23:53:57.645992   25213 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:53:57.646004   25213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:53:57.646069   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:53:57.646234   25213 start.go:360] acquireMachinesLock for ha-817269-m02: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:53:57.646282   25213 start.go:364] duration metric: took 25.679µs to acquireMachinesLock for "ha-817269-m02"
	I0913 23:53:57.646299   25213 start.go:93] Provisioning new machine with config: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:53:57.646359   25213 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0913 23:53:57.647723   25213 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 23:53:57.647822   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:53:57.647859   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:53:57.662310   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0913 23:53:57.662772   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:53:57.663373   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:53:57.663401   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:53:57.663696   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:53:57.663905   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:53:57.664087   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:53:57.664297   25213 start.go:159] libmachine.API.Create for "ha-817269" (driver="kvm2")
	I0913 23:53:57.664370   25213 client.go:168] LocalClient.Create starting
	I0913 23:53:57.664407   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:53:57.664452   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:57.664471   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:57.664626   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:53:57.664677   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:53:57.664695   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:53:57.664793   25213 main.go:141] libmachine: Running pre-create checks...
	I0913 23:53:57.664820   25213 main.go:141] libmachine: (ha-817269-m02) Calling .PreCreateCheck
	I0913 23:53:57.665030   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetConfigRaw
	I0913 23:53:57.665580   25213 main.go:141] libmachine: Creating machine...
	I0913 23:53:57.665599   25213 main.go:141] libmachine: (ha-817269-m02) Calling .Create
	I0913 23:53:57.665753   25213 main.go:141] libmachine: (ha-817269-m02) Creating KVM machine...
	I0913 23:53:57.667798   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found existing default KVM network
	I0913 23:53:57.668051   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found existing private KVM network mk-ha-817269
	I0913 23:53:57.668203   25213 main.go:141] libmachine: (ha-817269-m02) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02 ...
	I0913 23:53:57.668228   25213 main.go:141] libmachine: (ha-817269-m02) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:53:57.668330   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:57.668206   25578 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:57.668424   25213 main.go:141] libmachine: (ha-817269-m02) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:53:57.910101   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:57.909941   25578 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa...
	I0913 23:53:58.012058   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:58.011951   25578 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/ha-817269-m02.rawdisk...
	I0913 23:53:58.012088   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Writing magic tar header
	I0913 23:53:58.012097   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Writing SSH key tar header
	I0913 23:53:58.012106   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:58.012056   25578 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02 ...
	I0913 23:53:58.012183   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02
	I0913 23:53:58.012209   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:53:58.012222   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02 (perms=drwx------)
	I0913 23:53:58.012231   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:53:58.012240   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:53:58.012249   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:53:58.012255   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:53:58.012267   25213 main.go:141] libmachine: (ha-817269-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:53:58.012276   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:53:58.012292   25213 main.go:141] libmachine: (ha-817269-m02) Creating domain...
	I0913 23:53:58.012301   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:53:58.012315   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:53:58.012323   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:53:58.012337   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Checking permissions on dir: /home
	I0913 23:53:58.012345   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Skipping /home - not owner
	I0913 23:53:58.013245   25213 main.go:141] libmachine: (ha-817269-m02) define libvirt domain using xml: 
	I0913 23:53:58.013264   25213 main.go:141] libmachine: (ha-817269-m02) <domain type='kvm'>
	I0913 23:53:58.013280   25213 main.go:141] libmachine: (ha-817269-m02)   <name>ha-817269-m02</name>
	I0913 23:53:58.013287   25213 main.go:141] libmachine: (ha-817269-m02)   <memory unit='MiB'>2200</memory>
	I0913 23:53:58.013299   25213 main.go:141] libmachine: (ha-817269-m02)   <vcpu>2</vcpu>
	I0913 23:53:58.013306   25213 main.go:141] libmachine: (ha-817269-m02)   <features>
	I0913 23:53:58.013317   25213 main.go:141] libmachine: (ha-817269-m02)     <acpi/>
	I0913 23:53:58.013323   25213 main.go:141] libmachine: (ha-817269-m02)     <apic/>
	I0913 23:53:58.013333   25213 main.go:141] libmachine: (ha-817269-m02)     <pae/>
	I0913 23:53:58.013341   25213 main.go:141] libmachine: (ha-817269-m02)     
	I0913 23:53:58.013352   25213 main.go:141] libmachine: (ha-817269-m02)   </features>
	I0913 23:53:58.013362   25213 main.go:141] libmachine: (ha-817269-m02)   <cpu mode='host-passthrough'>
	I0913 23:53:58.013372   25213 main.go:141] libmachine: (ha-817269-m02)   
	I0913 23:53:58.013379   25213 main.go:141] libmachine: (ha-817269-m02)   </cpu>
	I0913 23:53:58.013386   25213 main.go:141] libmachine: (ha-817269-m02)   <os>
	I0913 23:53:58.013396   25213 main.go:141] libmachine: (ha-817269-m02)     <type>hvm</type>
	I0913 23:53:58.013404   25213 main.go:141] libmachine: (ha-817269-m02)     <boot dev='cdrom'/>
	I0913 23:53:58.013414   25213 main.go:141] libmachine: (ha-817269-m02)     <boot dev='hd'/>
	I0913 23:53:58.013422   25213 main.go:141] libmachine: (ha-817269-m02)     <bootmenu enable='no'/>
	I0913 23:53:58.013430   25213 main.go:141] libmachine: (ha-817269-m02)   </os>
	I0913 23:53:58.013437   25213 main.go:141] libmachine: (ha-817269-m02)   <devices>
	I0913 23:53:58.013447   25213 main.go:141] libmachine: (ha-817269-m02)     <disk type='file' device='cdrom'>
	I0913 23:53:58.013462   25213 main.go:141] libmachine: (ha-817269-m02)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/boot2docker.iso'/>
	I0913 23:53:58.013472   25213 main.go:141] libmachine: (ha-817269-m02)       <target dev='hdc' bus='scsi'/>
	I0913 23:53:58.013482   25213 main.go:141] libmachine: (ha-817269-m02)       <readonly/>
	I0913 23:53:58.013490   25213 main.go:141] libmachine: (ha-817269-m02)     </disk>
	I0913 23:53:58.013501   25213 main.go:141] libmachine: (ha-817269-m02)     <disk type='file' device='disk'>
	I0913 23:53:58.013514   25213 main.go:141] libmachine: (ha-817269-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:53:58.013526   25213 main.go:141] libmachine: (ha-817269-m02)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/ha-817269-m02.rawdisk'/>
	I0913 23:53:58.013537   25213 main.go:141] libmachine: (ha-817269-m02)       <target dev='hda' bus='virtio'/>
	I0913 23:53:58.013550   25213 main.go:141] libmachine: (ha-817269-m02)     </disk>
	I0913 23:53:58.013560   25213 main.go:141] libmachine: (ha-817269-m02)     <interface type='network'>
	I0913 23:53:58.013568   25213 main.go:141] libmachine: (ha-817269-m02)       <source network='mk-ha-817269'/>
	I0913 23:53:58.013575   25213 main.go:141] libmachine: (ha-817269-m02)       <model type='virtio'/>
	I0913 23:53:58.013584   25213 main.go:141] libmachine: (ha-817269-m02)     </interface>
	I0913 23:53:58.013591   25213 main.go:141] libmachine: (ha-817269-m02)     <interface type='network'>
	I0913 23:53:58.013602   25213 main.go:141] libmachine: (ha-817269-m02)       <source network='default'/>
	I0913 23:53:58.013612   25213 main.go:141] libmachine: (ha-817269-m02)       <model type='virtio'/>
	I0913 23:53:58.013619   25213 main.go:141] libmachine: (ha-817269-m02)     </interface>
	I0913 23:53:58.013629   25213 main.go:141] libmachine: (ha-817269-m02)     <serial type='pty'>
	I0913 23:53:58.013637   25213 main.go:141] libmachine: (ha-817269-m02)       <target port='0'/>
	I0913 23:53:58.013646   25213 main.go:141] libmachine: (ha-817269-m02)     </serial>
	I0913 23:53:58.013654   25213 main.go:141] libmachine: (ha-817269-m02)     <console type='pty'>
	I0913 23:53:58.013664   25213 main.go:141] libmachine: (ha-817269-m02)       <target type='serial' port='0'/>
	I0913 23:53:58.013674   25213 main.go:141] libmachine: (ha-817269-m02)     </console>
	I0913 23:53:58.013683   25213 main.go:141] libmachine: (ha-817269-m02)     <rng model='virtio'>
	I0913 23:53:58.013692   25213 main.go:141] libmachine: (ha-817269-m02)       <backend model='random'>/dev/random</backend>
	I0913 23:53:58.013701   25213 main.go:141] libmachine: (ha-817269-m02)     </rng>
	I0913 23:53:58.013708   25213 main.go:141] libmachine: (ha-817269-m02)     
	I0913 23:53:58.013717   25213 main.go:141] libmachine: (ha-817269-m02)     
	I0913 23:53:58.013724   25213 main.go:141] libmachine: (ha-817269-m02)   </devices>
	I0913 23:53:58.013733   25213 main.go:141] libmachine: (ha-817269-m02) </domain>
	I0913 23:53:58.013745   25213 main.go:141] libmachine: (ha-817269-m02) 
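
For readability, the libvirt domain definition that the driver logs line by line above can be collected into a single virsh define call. The snippet below is a cleaned-up sketch assembled from those log lines (abridged, with the long store paths shortened to "..."); it is not the exact mechanism minikube uses, which defines the domain through the libvirt API rather than the virsh CLI.

    # Sketch only: reproduce the logged domain XML and define it with virsh.
    # The "..." path prefixes stand in for the full .minikube store path seen in the log.
    virsh --connect qemu:///system define /dev/stdin <<'EOF'
    <domain type='kvm'>
      <name>ha-817269-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features><acpi/><apic/><pae/></features>
      <cpu mode='host-passthrough'></cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <!-- boot2docker ISO attached read-only; the raw disk image is the main disk -->
        <disk type='file' device='cdrom'>
          <source file='.../machines/ha-817269-m02/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='.../machines/ha-817269-m02/ha-817269-m02.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <!-- two NICs: the cluster-private network mk-ha-817269 and the default libvirt network -->
        <interface type='network'>
          <source network='mk-ha-817269'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'><target port='0'/></serial>
        <console type='pty'><target type='serial' port='0'/></console>
        <rng model='virtio'><backend model='random'>/dev/random</backend></rng>
      </devices>
    </domain>
    EOF
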
	I0913 23:53:58.020466   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:0a:ce:4e in network default
	I0913 23:53:58.021021   25213 main.go:141] libmachine: (ha-817269-m02) Ensuring networks are active...
	I0913 23:53:58.021046   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:58.021779   25213 main.go:141] libmachine: (ha-817269-m02) Ensuring network default is active
	I0913 23:53:58.022070   25213 main.go:141] libmachine: (ha-817269-m02) Ensuring network mk-ha-817269 is active
	I0913 23:53:58.022524   25213 main.go:141] libmachine: (ha-817269-m02) Getting domain xml...
	I0913 23:53:58.023156   25213 main.go:141] libmachine: (ha-817269-m02) Creating domain...
	I0913 23:53:59.258990   25213 main.go:141] libmachine: (ha-817269-m02) Waiting to get IP...
	I0913 23:53:59.259884   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:59.260305   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:53:59.260339   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:59.260293   25578 retry.go:31] will retry after 252.903714ms: waiting for machine to come up
	I0913 23:53:59.514798   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:59.515250   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:53:59.515284   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:59.515196   25578 retry.go:31] will retry after 243.975614ms: waiting for machine to come up
	I0913 23:53:59.760450   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:53:59.760896   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:53:59.760920   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:53:59.760863   25578 retry.go:31] will retry after 446.918322ms: waiting for machine to come up
	I0913 23:54:00.209499   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:00.209959   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:00.209984   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:00.209913   25578 retry.go:31] will retry after 371.644867ms: waiting for machine to come up
	I0913 23:54:00.583498   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:00.584074   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:00.584102   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:00.584022   25578 retry.go:31] will retry after 602.57541ms: waiting for machine to come up
	I0913 23:54:01.187665   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:01.188097   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:01.188134   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:01.188003   25578 retry.go:31] will retry after 636.328676ms: waiting for machine to come up
	I0913 23:54:01.825787   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:01.826208   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:01.826235   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:01.826162   25578 retry.go:31] will retry after 935.123574ms: waiting for machine to come up
	I0913 23:54:02.763341   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:02.763849   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:02.763876   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:02.763807   25578 retry.go:31] will retry after 1.434666123s: waiting for machine to come up
	I0913 23:54:04.200402   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:04.200901   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:04.200933   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:04.200804   25578 retry.go:31] will retry after 1.248828258s: waiting for machine to come up
	I0913 23:54:05.451314   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:05.451700   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:05.451730   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:05.451613   25578 retry.go:31] will retry after 1.935798889s: waiting for machine to come up
	I0913 23:54:07.389918   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:07.390398   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:07.390427   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:07.390347   25578 retry.go:31] will retry after 2.345270301s: waiting for machine to come up
	I0913 23:54:09.737093   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:09.737524   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:09.737545   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:09.737480   25578 retry.go:31] will retry after 2.860762897s: waiting for machine to come up
	I0913 23:54:12.601730   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:12.602285   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:12.602311   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:12.602216   25578 retry.go:31] will retry after 4.41059942s: waiting for machine to come up
	I0913 23:54:17.017065   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:17.017467   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find current IP address of domain ha-817269-m02 in network mk-ha-817269
	I0913 23:54:17.017488   25213 main.go:141] libmachine: (ha-817269-m02) DBG | I0913 23:54:17.017432   25578 retry.go:31] will retry after 4.935665555s: waiting for machine to come up
	I0913 23:54:21.956937   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:21.957630   25213 main.go:141] libmachine: (ha-817269-m02) Found IP for machine: 192.168.39.6
	I0913 23:54:21.957662   25213 main.go:141] libmachine: (ha-817269-m02) Reserving static IP address...
	I0913 23:54:21.957676   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has current primary IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:21.958107   25213 main.go:141] libmachine: (ha-817269-m02) DBG | unable to find host DHCP lease matching {name: "ha-817269-m02", mac: "52:54:00:12:e8:40", ip: "192.168.39.6"} in network mk-ha-817269
	I0913 23:54:22.033248   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Getting to WaitForSSH function...
	I0913 23:54:22.033276   25213 main.go:141] libmachine: (ha-817269-m02) Reserved static IP address: 192.168.39.6
	I0913 23:54:22.033299   25213 main.go:141] libmachine: (ha-817269-m02) Waiting for SSH to be available...
	I0913 23:54:22.035657   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.036155   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.036187   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.036318   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Using SSH client type: external
	I0913 23:54:22.036338   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa (-rw-------)
	I0913 23:54:22.036369   25213 main.go:141] libmachine: (ha-817269-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:54:22.036380   25213 main.go:141] libmachine: (ha-817269-m02) DBG | About to run SSH command:
	I0913 23:54:22.036394   25213 main.go:141] libmachine: (ha-817269-m02) DBG | exit 0
	I0913 23:54:22.163961   25213 main.go:141] libmachine: (ha-817269-m02) DBG | SSH cmd err, output: <nil>: 
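
The WaitForSSH step above shells out to the system ssh client with the long single-line argument list logged at 23:54:22.036369. Broken out onto separate lines, that probe is equivalent to the following sketch, reconstructed from the logged arguments (IP, key path, and options are taken straight from the log; line breaks are added only for readability):

    # SSH reachability probe: succeeds once the guest's sshd accepts the machine key.
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa \
        -p 22 docker@192.168.39.6 'exit 0'
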
	I0913 23:54:22.164275   25213 main.go:141] libmachine: (ha-817269-m02) KVM machine creation complete!
	I0913 23:54:22.164640   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetConfigRaw
	I0913 23:54:22.165156   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:22.165321   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:22.165452   25213 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:54:22.165463   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0913 23:54:22.166719   25213 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:54:22.166735   25213 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:54:22.166744   25213 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:54:22.166752   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.169014   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.169359   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.169401   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.169718   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.169900   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.170017   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.170113   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.170283   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.170503   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.170531   25213 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:54:22.291327   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:54:22.291350   25213 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:54:22.291357   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.294488   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.294844   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.294872   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.295093   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.295321   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.295494   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.295631   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.295818   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.295995   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.296006   25213 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:54:22.408849   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:54:22.408931   25213 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:54:22.408945   25213 main.go:141] libmachine: Provisioning with buildroot...
	I0913 23:54:22.408958   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:54:22.409228   25213 buildroot.go:166] provisioning hostname "ha-817269-m02"
	I0913 23:54:22.409257   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:54:22.409446   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.412134   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.412515   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.412543   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.412679   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.412850   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.413006   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.413149   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.413320   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.413505   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.413516   25213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269-m02 && echo "ha-817269-m02" | sudo tee /etc/hostname
	I0913 23:54:22.537581   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269-m02
	
	I0913 23:54:22.537610   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.540656   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.541295   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.541379   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.541682   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.541925   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.542136   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.542322   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.542488   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.542692   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.542711   25213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:54:22.664074   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:54:22.664106   25213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:54:22.664122   25213 buildroot.go:174] setting up certificates
	I0913 23:54:22.664132   25213 provision.go:84] configureAuth start
	I0913 23:54:22.664140   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetMachineName
	I0913 23:54:22.664402   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:22.667256   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.667697   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.667728   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.667924   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.670069   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.670397   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.670423   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.670585   25213 provision.go:143] copyHostCerts
	I0913 23:54:22.670622   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:54:22.670670   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0913 23:54:22.670683   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:54:22.670858   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:54:22.670970   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:54:22.670996   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0913 23:54:22.671006   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:54:22.671048   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:54:22.671108   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:54:22.671132   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0913 23:54:22.671141   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:54:22.671171   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:54:22.671233   25213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269-m02 san=[127.0.0.1 192.168.39.6 ha-817269-m02 localhost minikube]
	I0913 23:54:22.772722   25213 provision.go:177] copyRemoteCerts
	I0913 23:54:22.772797   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:54:22.772827   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.775563   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.775934   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.775959   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.776109   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.776280   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.776427   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.776581   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:22.862036   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 23:54:22.862119   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 23:54:22.886278   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 23:54:22.886364   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:54:22.910017   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 23:54:22.910086   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:54:22.933497   25213 provision.go:87] duration metric: took 269.353109ms to configureAuth
	I0913 23:54:22.933532   25213 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:54:22.933737   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:54:22.933895   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:22.936636   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.936886   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:22.936917   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:22.937096   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:22.937292   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.937466   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:22.937637   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:22.937868   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:22.938039   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:22.938053   25213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:54:23.153804   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:54:23.153831   25213 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:54:23.153844   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetURL
	I0913 23:54:23.155077   25213 main.go:141] libmachine: (ha-817269-m02) DBG | Using libvirt version 6000000
	I0913 23:54:23.157152   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.157475   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.157508   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.157651   25213 main.go:141] libmachine: Docker is up and running!
	I0913 23:54:23.157664   25213 main.go:141] libmachine: Reticulating splines...
	I0913 23:54:23.157670   25213 client.go:171] duration metric: took 25.493288714s to LocalClient.Create
	I0913 23:54:23.157695   25213 start.go:167] duration metric: took 25.493399423s to libmachine.API.Create "ha-817269"
	I0913 23:54:23.157704   25213 start.go:293] postStartSetup for "ha-817269-m02" (driver="kvm2")
	I0913 23:54:23.157714   25213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:54:23.157730   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.157948   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:54:23.157969   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:23.160140   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.160440   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.160463   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.160641   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.160816   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.160952   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.161080   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:23.245507   25213 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:54:23.249278   25213 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:54:23.249304   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:54:23.249379   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:54:23.249482   25213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0913 23:54:23.249494   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0913 23:54:23.249605   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 23:54:23.258693   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:54:23.281493   25213 start.go:296] duration metric: took 123.774542ms for postStartSetup
	I0913 23:54:23.281550   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetConfigRaw
	I0913 23:54:23.282117   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:23.284610   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.284952   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.284980   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.285220   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:54:23.285417   25213 start.go:128] duration metric: took 25.639046852s to createHost
	I0913 23:54:23.285439   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:23.287718   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.288121   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.288148   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.288297   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.288492   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.288656   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.288821   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.288951   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:54:23.289154   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0913 23:54:23.289165   25213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:54:23.400269   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726271663.360120305
	
	I0913 23:54:23.400307   25213 fix.go:216] guest clock: 1726271663.360120305
	I0913 23:54:23.400320   25213 fix.go:229] Guest: 2024-09-13 23:54:23.360120305 +0000 UTC Remote: 2024-09-13 23:54:23.285428402 +0000 UTC m=+72.328645296 (delta=74.691903ms)
	I0913 23:54:23.400335   25213 fix.go:200] guest clock delta is within tolerance: 74.691903ms
	I0913 23:54:23.400341   25213 start.go:83] releasing machines lock for "ha-817269-m02", held for 25.754049851s
	I0913 23:54:23.400363   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.400609   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:23.403214   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.403547   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.403575   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.405930   25213 out.go:177] * Found network options:
	I0913 23:54:23.407210   25213 out.go:177]   - NO_PROXY=192.168.39.132
	W0913 23:54:23.408403   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:54:23.408430   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.408985   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.409163   25213 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0913 23:54:23.409286   25213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:54:23.409330   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	W0913 23:54:23.409342   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:54:23.409408   25213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:54:23.409429   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0913 23:54:23.412238   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.412263   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.412647   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.412677   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.412817   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.412821   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:23.412840   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:23.413006   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.413010   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0913 23:54:23.413163   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.413174   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0913 23:54:23.413307   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:23.413343   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0913 23:54:23.413501   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0913 23:54:23.645752   25213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:54:23.652154   25213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:54:23.652226   25213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:54:23.668085   25213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:54:23.668109   25213 start.go:495] detecting cgroup driver to use...
	I0913 23:54:23.668162   25213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:54:23.683627   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:54:23.697419   25213 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:54:23.697474   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:54:23.711521   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:54:23.725820   25213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:54:23.838265   25213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:54:23.994503   25213 docker.go:233] disabling docker service ...
	I0913 23:54:23.994584   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:54:24.008957   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:54:24.021851   25213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:54:24.157548   25213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:54:24.268397   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:54:24.281910   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:54:24.298933   25213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:54:24.298991   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.309300   25213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:54:24.309362   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.319549   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.329711   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.340063   25213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:54:24.350714   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.362073   25213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:54:24.378622   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
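Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image and cgroup manager). A rough equivalent of the first two substitutions with Go regexps on a local copy of the file; the paths and values come from the log, the program itself is only a sketch.

// Rewrites pause_image and cgroup_manager in the cri-o drop-in config,
// matching the sed substitutions shown in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", conf)
}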
	I0913 23:54:24.388538   25213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:54:24.398162   25213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:54:24.398216   25213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:54:24.411843   25213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:54:24.422163   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:54:24.538495   25213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:54:24.631278   25213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:54:24.631354   25213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:54:24.636266   25213 start.go:563] Will wait 60s for crictl version
	I0913 23:54:24.636315   25213 ssh_runner.go:195] Run: which crictl
	I0913 23:54:24.639869   25213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:54:24.679035   25213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
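Note: after restarting cri-o, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal sketch of that bounded wait, assuming a local stat; the 60s bound and socket path are from the log, the 500ms polling interval is an assumption.

// waitForSocket polls for the CRI socket until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}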
	I0913 23:54:24.679104   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:54:24.710066   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:54:24.744990   25213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:54:24.746440   25213 out.go:177]   - env NO_PROXY=192.168.39.132
	I0913 23:54:24.747886   25213 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0913 23:54:24.750572   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:24.750888   25213 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:54:11 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0913 23:54:24.750913   25213 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0913 23:54:24.751116   25213 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:54:24.755119   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:54:24.767307   25213 mustload.go:65] Loading cluster: ha-817269
	I0913 23:54:24.767500   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:54:24.767733   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:54:24.767777   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:54:24.782693   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0913 23:54:24.783111   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:54:24.783584   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:54:24.783603   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:54:24.783942   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:54:24.784120   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:54:24.785645   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:54:24.785918   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:54:24.785950   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:54:24.801316   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0913 23:54:24.801721   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:54:24.802150   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:54:24.802172   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:54:24.802472   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:54:24.802667   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:54:24.802792   25213 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.6
	I0913 23:54:24.802804   25213 certs.go:194] generating shared ca certs ...
	I0913 23:54:24.802821   25213 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:54:24.802933   25213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:54:24.802970   25213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:54:24.802978   25213 certs.go:256] generating profile certs ...
	I0913 23:54:24.803050   25213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0913 23:54:24.803075   25213 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df
	I0913 23:54:24.803088   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.254]
	I0913 23:54:25.167222   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df ...
	I0913 23:54:25.167258   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df: {Name:mk007159b7cd7eebf1ca7347528c8f29aa9b052c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:54:25.167418   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df ...
	I0913 23:54:25.167431   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df: {Name:mk268c398dae4c1095b1df23597f8dfb5196fe24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:54:25.167503   25213 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.146203df -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0913 23:54:25.167636   25213 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.146203df -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0913 23:54:25.167757   25213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
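Note: the profile cert step above amounts to issuing an apiserver serving certificate signed by the cluster CA, with the service VIP, localhost, both node IPs and the kube-vip VIP as IP SANs. A standalone sketch with crypto/x509 using the SAN list from the log; it generates a throwaway CA in place of the existing minikubeCA key pair and is illustrative only, not minikube's certs package.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; the real flow reuses the minikubeCA key pair already on disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs listed in the log above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.132"), net.ParseIP("192.168.39.6"), net.ParseIP("192.168.39.254"),
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued apiserver cert, %d DER bytes, %d IP SANs\n", len(der), len(ips))
}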
	I0913 23:54:25.167771   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 23:54:25.167798   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 23:54:25.167814   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:54:25.167837   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:54:25.167854   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 23:54:25.167870   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 23:54:25.167890   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 23:54:25.167902   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 23:54:25.167949   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0913 23:54:25.167978   25213 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0913 23:54:25.167986   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:54:25.168005   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:54:25.168026   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:54:25.168046   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:54:25.168080   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:54:25.168104   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.168120   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.168133   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.168160   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:54:25.171250   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:25.171670   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:54:25.171704   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:25.171843   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:54:25.172033   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:54:25.172176   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:54:25.172323   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:54:25.248199   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 23:54:25.252951   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 23:54:25.264893   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 23:54:25.268977   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 23:54:25.278801   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 23:54:25.282643   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 23:54:25.292306   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 23:54:25.296176   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 23:54:25.307298   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 23:54:25.311893   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 23:54:25.321687   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 23:54:25.325500   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 23:54:25.338339   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:54:25.362429   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:54:25.385639   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:54:25.410668   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:54:25.437559   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0913 23:54:25.462124   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 23:54:25.486142   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:54:25.511331   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:54:25.535128   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0913 23:54:25.561350   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0913 23:54:25.584473   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:54:25.609815   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 23:54:25.628611   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 23:54:25.646465   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 23:54:25.662249   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 23:54:25.679110   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 23:54:25.695420   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 23:54:25.711272   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 23:54:25.727712   25213 ssh_runner.go:195] Run: openssl version
	I0913 23:54:25.733253   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0913 23:54:25.744025   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.748785   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.748856   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0913 23:54:25.756605   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0913 23:54:25.767726   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0913 23:54:25.779862   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.784596   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.784652   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0913 23:54:25.790148   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 23:54:25.800954   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:54:25.811901   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.816453   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.816505   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:54:25.821966   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
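Note: each CA installed under /usr/share/ca-certificates above is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem). A small local sketch of that step, shelling out to the same "openssl x509 -hash -noout" invocation the log uses; the helper name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// mirroring the "test -L ... || ln -fs ..." pattern in the log above.
func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // link already present, leave it alone
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}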
	I0913 23:54:25.832543   25213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:54:25.836315   25213 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:54:25.836366   25213 kubeadm.go:934] updating node {m02 192.168.39.6 8443 v1.31.1 crio true true} ...
	I0913 23:54:25.836443   25213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
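Note: the kubelet drop-in above is rendered per node; only the binary version, --hostname-override and --node-ip differ between m02 and the primary. A small sketch of that templating with text/template, filled with the values shown in the log; this is not the bootstrapper's actual template code.

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		Version, Hostname, NodeIP string
	}{"v1.31.1", "ha-817269-m02", "192.168.39.6"})
}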
	I0913 23:54:25.836465   25213 kube-vip.go:115] generating kube-vip config ...
	I0913 23:54:25.836503   25213 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 23:54:25.850858   25213 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 23:54:25.850926   25213 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
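Note: the lb_enable/lb_port entries in the kube-vip manifest above follow the "auto-enabling control-plane load-balancing" line, which itself follows the ip_vs modprobe. A rough sketch of that gate, assuming load-balancing is only emitted when the IPVS modules load; the real decision lives in minikube's kube-vip config generation, not in this snippet.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same modprobe invocation as in the log; success means IPVS is usable.
	lbEnable := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run() == nil
	if lbEnable {
		fmt.Println("auto-enabling control-plane load-balancing in kube-vip (lb_enable=true, lb_port=8443)")
	} else {
		fmt.Println("IPVS modules unavailable; generating kube-vip config without load-balancing")
	}
}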
	I0913 23:54:25.850986   25213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:54:25.860284   25213 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 23:54:25.860351   25213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 23:54:25.869346   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 23:54:25.869373   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:54:25.869416   25213 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0913 23:54:25.869423   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:54:25.869446   25213 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0913 23:54:25.873299   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 23:54:25.873323   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 23:54:26.732942   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:54:26.733020   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:54:26.737557   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 23:54:26.737593   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 23:54:27.227538   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:54:27.242458   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:54:27.242558   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:54:27.246749   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 23:54:27.246785   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
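Note: the kubectl/kubeadm/kubelet binaries above are downloaded from dl.k8s.io with a checksum taken from the matching ".sha256" URL, cached locally, then copied to /var/lib/minikube/binaries on the node. A minimal sketch of the download-and-verify step for one binary; the URLs mirror the log, the output path and in-memory buffering are assumptions of the sketch.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The .sha256 file holds the hex digest (optionally followed by a filename).
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch for kubelet")
		os.Exit(1)
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("downloaded kubelet (%d bytes), checksum verified\n", len(bin))
}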
	I0913 23:54:27.541035   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 23:54:27.550667   25213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0913 23:54:27.568506   25213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:54:27.584943   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 23:54:27.601274   25213 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 23:54:27.605117   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:54:27.618022   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:54:27.735335   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:54:27.752814   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:54:27.753143   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:54:27.753189   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:54:27.768338   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0913 23:54:27.768723   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:54:27.769220   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:54:27.769249   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:54:27.769632   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:54:27.769801   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:54:27.769919   25213 start.go:317] joinCluster: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:54:27.770019   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 23:54:27.770041   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:54:27.773192   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:27.773721   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:54:27.773753   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:54:27.773906   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:54:27.774083   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:54:27.774229   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:54:27.774356   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:54:27.922400   25213 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:54:27.922447   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 99eyjh.xvl4qb8rfpz08c9j --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m02 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443"
	I0913 23:54:48.977645   25213 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 99eyjh.xvl4qb8rfpz08c9j --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m02 --control-plane --apiserver-advertise-address=192.168.39.6 --apiserver-bind-port=8443": (21.055167995s)
	I0913 23:54:48.977687   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 23:54:49.509013   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-817269-m02 minikube.k8s.io/updated_at=2024_09_13T23_54_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=ha-817269 minikube.k8s.io/primary=false
	I0913 23:54:49.635486   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-817269-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 23:54:49.761125   25213 start.go:319] duration metric: took 21.991200254s to joinCluster
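Note: the join above is driven by "kubeadm token create --print-join-command" run on the primary, with per-node flags appended before the command is executed on m02. A sketch of that assembly using the flag values visible in the log; the token and CA hash are placeholders here, and in the real flow the base command is whatever the primary printed.

package main

import (
	"fmt"
	"strings"
)

// joinCommand appends the per-node flags minikube adds to the printed join command.
func joinCommand(printed, nodeName, advertiseIP string, controlPlane bool) string {
	parts := []string{
		strings.TrimSpace(printed),
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
	}
	if controlPlane {
		parts = append(parts,
			"--control-plane",
			"--apiserver-advertise-address="+advertiseIP,
			"--apiserver-bind-port=8443")
	}
	return strings.Join(parts, " ")
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
	fmt.Println(joinCommand(printed, "ha-817269-m02", "192.168.39.6", true))
}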
	I0913 23:54:49.761213   25213 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:54:49.761544   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:54:49.763048   25213 out.go:177] * Verifying Kubernetes components...
	I0913 23:54:49.764369   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:54:50.131147   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:54:50.186487   25213 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:54:50.186844   25213 kapi.go:59] client config for ha-817269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 23:54:50.186921   25213 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
	I0913 23:54:50.187195   25213 node_ready.go:35] waiting up to 6m0s for node "ha-817269-m02" to be "Ready" ...
	I0913 23:54:50.187294   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:50.187304   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:50.187315   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:50.187322   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:50.199575   25213 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0913 23:54:50.687643   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:50.687668   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:50.687680   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:50.687686   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:50.692326   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:51.187962   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:51.187984   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:51.187995   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:51.188001   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:51.192539   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:51.687410   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:51.687432   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:51.687440   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:51.687445   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:51.721753   25213 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0913 23:54:52.187514   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:52.187537   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:52.187545   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:52.187548   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:52.190674   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:52.191187   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:52.688115   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:52.688142   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:52.688180   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:52.688189   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:52.693536   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:54:53.187969   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:53.187995   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:53.188007   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:53.188013   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:53.191362   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:53.688269   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:53.688299   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:53.688306   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:53.688309   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:53.693338   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:54:54.188343   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:54.188367   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:54.188379   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:54.188387   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:54.191692   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:54.192292   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:54.687625   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:54.687647   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:54.687657   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:54.687663   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:54.691817   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:55.187473   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:55.187501   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:55.187514   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:55.187520   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:55.191375   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:55.687378   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:55.687402   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:55.687409   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:55.687412   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:55.691066   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:56.187872   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:56.187894   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:56.187906   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:56.187910   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:56.191038   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:56.687548   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:56.687572   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:56.687580   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:56.687583   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:56.690699   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:56.691229   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:57.187650   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:57.187674   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:57.187683   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:57.187689   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:57.191958   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:54:57.688274   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:57.688298   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:57.688305   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:57.688309   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:57.691840   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:58.188310   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:58.188332   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:58.188340   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:58.188343   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:58.191660   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:58.687463   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:58.687485   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:58.687493   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:58.687497   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:58.690202   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:54:59.188173   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:59.188194   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:59.188201   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:59.188205   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:59.191631   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:54:59.192106   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:54:59.687479   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:54:59.687500   25213 round_trippers.go:469] Request Headers:
	I0913 23:54:59.687508   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:54:59.687514   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:54:59.690851   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:00.187606   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:00.187628   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:00.187636   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:00.187640   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:00.190899   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:00.687871   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:00.687891   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:00.687900   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:00.687905   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:00.690961   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:01.187839   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:01.187863   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:01.187871   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:01.187874   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:01.191243   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:01.688094   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:01.688119   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:01.688129   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:01.688133   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:01.691175   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:01.691589   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:55:02.188070   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:02.188094   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:02.188102   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:02.188106   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:02.191108   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:02.688379   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:02.688401   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:02.688411   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:02.688417   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:02.691620   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:03.187906   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:03.187926   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:03.187934   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:03.187938   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:03.191160   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:03.688073   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:03.688096   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:03.688106   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:03.688110   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:03.691542   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:03.692067   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:55:04.187428   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:04.187455   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:04.187463   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:04.187467   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:04.190554   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:04.687487   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:04.687509   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:04.687518   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:04.687522   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:04.690777   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:05.187470   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:05.187492   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:05.187500   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:05.187504   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:05.190352   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:05.688410   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:05.688433   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:05.688440   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:05.688443   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:05.691726   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:05.692376   25213 node_ready.go:53] node "ha-817269-m02" has status "Ready":"False"
	I0913 23:55:06.188212   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:06.188234   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:06.188242   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:06.188246   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:06.191702   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:06.688026   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:06.688048   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:06.688057   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:06.688060   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:06.691182   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.188091   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.188114   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.188125   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.188132   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.191400   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.191964   25213 node_ready.go:49] node "ha-817269-m02" has status "Ready":"True"
	I0913 23:55:07.191983   25213 node_ready.go:38] duration metric: took 17.004770061s for node "ha-817269-m02" to be "Ready" ...
	I0913 23:55:07.191992   25213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:55:07.192081   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:07.192090   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.192097   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.192100   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.196407   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:55:07.202154   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.202236   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mwpbw
	I0913 23:55:07.202247   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.202254   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.202260   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.205115   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.205745   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.205761   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.205770   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.205774   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.208101   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.208585   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.208605   25213 pod_ready.go:82] duration metric: took 6.423802ms for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.208613   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.208663   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rq5pv
	I0913 23:55:07.208671   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.208677   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.208682   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.210997   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.211658   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.211675   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.211685   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.211689   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.213873   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.214435   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.214454   25213 pod_ready.go:82] duration metric: took 5.834238ms for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.214465   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.214523   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269
	I0913 23:55:07.214534   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.214543   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.214552   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.216590   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.217270   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.217287   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.217296   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.217303   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.219492   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.220051   25213 pod_ready.go:93] pod "etcd-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.220070   25213 pod_ready.go:82] duration metric: took 5.597775ms for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.220080   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.220133   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m02
	I0913 23:55:07.220164   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.220176   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.220186   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.222394   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.222973   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.222986   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.222993   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.222998   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.225189   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:55:07.225755   25213 pod_ready.go:93] pod "etcd-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.225774   25213 pod_ready.go:82] duration metric: took 5.686118ms for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.225792   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.389193   25213 request.go:632] Waited for 163.333572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:55:07.389282   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:55:07.389290   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.389300   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.389306   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.394402   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:55:07.588463   25213 request.go:632] Waited for 193.3812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.588523   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:07.588541   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.588548   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.588551   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.591806   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.592411   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.592429   25213 pod_ready.go:82] duration metric: took 366.63076ms for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.592439   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.788573   25213 request.go:632] Waited for 196.073848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:55:07.788630   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:55:07.788635   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.788642   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.788646   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.791885   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.988954   25213 request.go:632] Waited for 196.353296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.989035   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:07.989041   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:07.989048   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:07.989053   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:07.992088   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:07.992531   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:07.992554   25213 pod_ready.go:82] duration metric: took 400.10971ms for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:07.992564   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.188658   25213 request.go:632] Waited for 196.03691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:55:08.188720   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:55:08.188725   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.188732   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.188737   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.192380   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.388478   25213 request.go:632] Waited for 195.353706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:08.388555   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:08.388566   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.388576   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.388581   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.391860   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.392400   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:08.392421   25213 pod_ready.go:82] duration metric: took 399.850459ms for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.392431   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.588414   25213 request.go:632] Waited for 195.896935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:55:08.588589   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:55:08.588601   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.588609   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.588613   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.591801   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.788876   25213 request.go:632] Waited for 196.380536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:08.788939   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:08.788945   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.788956   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.788960   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.792396   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:08.793090   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:08.793111   25213 pod_ready.go:82] duration metric: took 400.671065ms for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.793120   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:08.989113   25213 request.go:632] Waited for 195.909002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:55:08.989169   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:55:08.989174   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:08.989181   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:08.989185   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:08.992406   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.188278   25213 request.go:632] Waited for 195.302069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:09.188371   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:09.188377   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.188384   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.188389   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.191646   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.192377   25213 pod_ready.go:93] pod "kube-proxy-7t9b2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:09.192399   25213 pod_ready.go:82] duration metric: took 399.27203ms for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.192411   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.388390   25213 request.go:632] Waited for 195.903787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:55:09.388439   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:55:09.388444   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.388451   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.388454   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.392179   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.588159   25213 request.go:632] Waited for 195.286849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.588215   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.588220   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.588227   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.588230   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.591515   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.592016   25213 pod_ready.go:93] pod "kube-proxy-p9lkl" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:09.592036   25213 pod_ready.go:82] duration metric: took 399.617448ms for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.592048   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.789123   25213 request.go:632] Waited for 196.975871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:55:09.789205   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:55:09.789210   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.789218   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.789222   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.802921   25213 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0913 23:55:09.989137   25213 request.go:632] Waited for 185.633841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.989231   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:55:09.989242   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:09.989261   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:09.989271   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:09.992888   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:09.993351   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:09.993369   25213 pod_ready.go:82] duration metric: took 401.314204ms for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:09.993382   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:10.188501   25213 request.go:632] Waited for 195.041759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:55:10.188585   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:55:10.188590   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.188597   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.188601   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.191888   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:10.388757   25213 request.go:632] Waited for 196.346001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:10.388807   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:55:10.388812   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.388820   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.388840   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.396204   25213 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 23:55:10.396959   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:55:10.396988   25213 pod_ready.go:82] duration metric: took 403.599221ms for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:55:10.397003   25213 pod_ready.go:39] duration metric: took 3.204979529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:55:10.397022   25213 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:55:10.397088   25213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:55:10.413715   25213 api_server.go:72] duration metric: took 20.652464406s to wait for apiserver process to appear ...
	I0913 23:55:10.413752   25213 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:55:10.413777   25213 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0913 23:55:10.419955   25213 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
	I0913 23:55:10.420058   25213 round_trippers.go:463] GET https://192.168.39.132:8443/version
	I0913 23:55:10.420069   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.420095   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.420105   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.421090   25213 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0913 23:55:10.421199   25213 api_server.go:141] control plane version: v1.31.1
	I0913 23:55:10.421218   25213 api_server.go:131] duration metric: took 7.458574ms to wait for apiserver health ...
	I0913 23:55:10.421225   25213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:55:10.588679   25213 request.go:632] Waited for 167.354613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.588742   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.588749   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.588760   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.588765   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.593508   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:55:10.600058   25213 system_pods.go:59] 17 kube-system pods found
	I0913 23:55:10.600090   25213 system_pods.go:61] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:55:10.600096   25213 system_pods.go:61] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:55:10.600100   25213 system_pods.go:61] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:55:10.600103   25213 system_pods.go:61] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:55:10.600107   25213 system_pods.go:61] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:55:10.600110   25213 system_pods.go:61] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:55:10.600113   25213 system_pods.go:61] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:55:10.600116   25213 system_pods.go:61] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:55:10.600120   25213 system_pods.go:61] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:55:10.600124   25213 system_pods.go:61] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:55:10.600127   25213 system_pods.go:61] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:55:10.600131   25213 system_pods.go:61] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:55:10.600136   25213 system_pods.go:61] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:55:10.600139   25213 system_pods.go:61] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:55:10.600142   25213 system_pods.go:61] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:55:10.600145   25213 system_pods.go:61] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:55:10.600148   25213 system_pods.go:61] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:55:10.600153   25213 system_pods.go:74] duration metric: took 178.923004ms to wait for pod list to return data ...
	I0913 23:55:10.600162   25213 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:55:10.788631   25213 request.go:632] Waited for 188.399764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:55:10.788695   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:55:10.788702   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.788712   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.788717   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.792847   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:55:10.793128   25213 default_sa.go:45] found service account: "default"
	I0913 23:55:10.793152   25213 default_sa.go:55] duration metric: took 192.982758ms for default service account to be created ...
	I0913 23:55:10.793162   25213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:55:10.988315   25213 request.go:632] Waited for 195.055947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.988389   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:55:10.988397   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:10.988407   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:10.988413   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:10.994679   25213 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 23:55:11.000285   25213 system_pods.go:86] 17 kube-system pods found
	I0913 23:55:11.000316   25213 system_pods.go:89] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:55:11.000322   25213 system_pods.go:89] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:55:11.000326   25213 system_pods.go:89] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:55:11.000330   25213 system_pods.go:89] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:55:11.000333   25213 system_pods.go:89] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:55:11.000337   25213 system_pods.go:89] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:55:11.000341   25213 system_pods.go:89] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:55:11.000346   25213 system_pods.go:89] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:55:11.000352   25213 system_pods.go:89] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:55:11.000358   25213 system_pods.go:89] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:55:11.000366   25213 system_pods.go:89] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:55:11.000371   25213 system_pods.go:89] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:55:11.000379   25213 system_pods.go:89] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:55:11.000384   25213 system_pods.go:89] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:55:11.000387   25213 system_pods.go:89] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:55:11.000390   25213 system_pods.go:89] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:55:11.000393   25213 system_pods.go:89] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:55:11.000399   25213 system_pods.go:126] duration metric: took 207.230473ms to wait for k8s-apps to be running ...
	I0913 23:55:11.000408   25213 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:55:11.000450   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:55:11.015127   25213 system_svc.go:56] duration metric: took 14.707803ms WaitForService to wait for kubelet
	I0913 23:55:11.015160   25213 kubeadm.go:582] duration metric: took 21.253914529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:55:11.015180   25213 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:55:11.188954   25213 request.go:632] Waited for 173.69537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
	I0913 23:55:11.189014   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
	I0913 23:55:11.189020   25213 round_trippers.go:469] Request Headers:
	I0913 23:55:11.189027   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:55:11.189030   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:55:11.192671   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:55:11.193541   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:55:11.193579   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:55:11.193591   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:55:11.193594   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:55:11.193599   25213 node_conditions.go:105] duration metric: took 178.414001ms to run NodePressure ...
	I0913 23:55:11.193609   25213 start.go:241] waiting for startup goroutines ...
	I0913 23:55:11.193631   25213 start.go:255] writing updated cluster config ...
	I0913 23:55:11.196301   25213 out.go:201] 
	I0913 23:55:11.198620   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:55:11.198761   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:55:11.200318   25213 out.go:177] * Starting "ha-817269-m03" control-plane node in "ha-817269" cluster
	I0913 23:55:11.201674   25213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:55:11.201713   25213 cache.go:56] Caching tarball of preloaded images
	I0913 23:55:11.201816   25213 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0913 23:55:11.201827   25213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0913 23:55:11.201935   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:55:11.202132   25213 start.go:360] acquireMachinesLock for ha-817269-m03: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:55:11.202178   25213 start.go:364] duration metric: took 26.572µs to acquireMachinesLock for "ha-817269-m03"
	I0913 23:55:11.202195   25213 start.go:93] Provisioning new machine with config: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:55:11.202318   25213 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0913 23:55:11.203728   25213 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 23:55:11.203850   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:11.203887   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:11.218764   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I0913 23:55:11.219183   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:11.219676   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:11.219700   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:11.220096   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:11.220290   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:11.220405   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:11.220551   25213 start.go:159] libmachine.API.Create for "ha-817269" (driver="kvm2")
	I0913 23:55:11.220579   25213 client.go:168] LocalClient.Create starting
	I0913 23:55:11.220610   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0913 23:55:11.220649   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:55:11.220665   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:55:11.220727   25213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0913 23:55:11.220751   25213 main.go:141] libmachine: Decoding PEM data...
	I0913 23:55:11.220770   25213 main.go:141] libmachine: Parsing certificate...
	I0913 23:55:11.220794   25213 main.go:141] libmachine: Running pre-create checks...
	I0913 23:55:11.220804   25213 main.go:141] libmachine: (ha-817269-m03) Calling .PreCreateCheck
	I0913 23:55:11.220943   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetConfigRaw
	I0913 23:55:11.221365   25213 main.go:141] libmachine: Creating machine...
	I0913 23:55:11.221382   25213 main.go:141] libmachine: (ha-817269-m03) Calling .Create
	I0913 23:55:11.221507   25213 main.go:141] libmachine: (ha-817269-m03) Creating KVM machine...
	I0913 23:55:11.222693   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found existing default KVM network
	I0913 23:55:11.222906   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found existing private KVM network mk-ha-817269
	I0913 23:55:11.223033   25213 main.go:141] libmachine: (ha-817269-m03) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03 ...
	I0913 23:55:11.223075   25213 main.go:141] libmachine: (ha-817269-m03) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:55:11.223152   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.223048   25987 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:55:11.223248   25213 main.go:141] libmachine: (ha-817269-m03) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0913 23:55:11.452469   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.452313   25987 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa...
	I0913 23:55:11.621065   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.620963   25987 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/ha-817269-m03.rawdisk...
	I0913 23:55:11.621096   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Writing magic tar header
	I0913 23:55:11.621109   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Writing SSH key tar header
	I0913 23:55:11.621119   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:11.621083   25987 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03 ...
	I0913 23:55:11.621172   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03
	I0913 23:55:11.621213   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03 (perms=drwx------)
	I0913 23:55:11.621243   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0913 23:55:11.621257   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0913 23:55:11.621270   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:55:11.621284   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0913 23:55:11.621299   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0913 23:55:11.621307   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 23:55:11.621319   25213 main.go:141] libmachine: (ha-817269-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 23:55:11.621329   25213 main.go:141] libmachine: (ha-817269-m03) Creating domain...
	I0913 23:55:11.621344   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0913 23:55:11.621355   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 23:55:11.621365   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home/jenkins
	I0913 23:55:11.621370   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Checking permissions on dir: /home
	I0913 23:55:11.621377   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Skipping /home - not owner
	I0913 23:55:11.622208   25213 main.go:141] libmachine: (ha-817269-m03) define libvirt domain using xml: 
	I0913 23:55:11.622227   25213 main.go:141] libmachine: (ha-817269-m03) <domain type='kvm'>
	I0913 23:55:11.622236   25213 main.go:141] libmachine: (ha-817269-m03)   <name>ha-817269-m03</name>
	I0913 23:55:11.622248   25213 main.go:141] libmachine: (ha-817269-m03)   <memory unit='MiB'>2200</memory>
	I0913 23:55:11.622255   25213 main.go:141] libmachine: (ha-817269-m03)   <vcpu>2</vcpu>
	I0913 23:55:11.622262   25213 main.go:141] libmachine: (ha-817269-m03)   <features>
	I0913 23:55:11.622279   25213 main.go:141] libmachine: (ha-817269-m03)     <acpi/>
	I0913 23:55:11.622286   25213 main.go:141] libmachine: (ha-817269-m03)     <apic/>
	I0913 23:55:11.622295   25213 main.go:141] libmachine: (ha-817269-m03)     <pae/>
	I0913 23:55:11.622301   25213 main.go:141] libmachine: (ha-817269-m03)     
	I0913 23:55:11.622309   25213 main.go:141] libmachine: (ha-817269-m03)   </features>
	I0913 23:55:11.622316   25213 main.go:141] libmachine: (ha-817269-m03)   <cpu mode='host-passthrough'>
	I0913 23:55:11.622336   25213 main.go:141] libmachine: (ha-817269-m03)   
	I0913 23:55:11.622357   25213 main.go:141] libmachine: (ha-817269-m03)   </cpu>
	I0913 23:55:11.622399   25213 main.go:141] libmachine: (ha-817269-m03)   <os>
	I0913 23:55:11.622416   25213 main.go:141] libmachine: (ha-817269-m03)     <type>hvm</type>
	I0913 23:55:11.622430   25213 main.go:141] libmachine: (ha-817269-m03)     <boot dev='cdrom'/>
	I0913 23:55:11.622444   25213 main.go:141] libmachine: (ha-817269-m03)     <boot dev='hd'/>
	I0913 23:55:11.622457   25213 main.go:141] libmachine: (ha-817269-m03)     <bootmenu enable='no'/>
	I0913 23:55:11.622467   25213 main.go:141] libmachine: (ha-817269-m03)   </os>
	I0913 23:55:11.622476   25213 main.go:141] libmachine: (ha-817269-m03)   <devices>
	I0913 23:55:11.622486   25213 main.go:141] libmachine: (ha-817269-m03)     <disk type='file' device='cdrom'>
	I0913 23:55:11.622512   25213 main.go:141] libmachine: (ha-817269-m03)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/boot2docker.iso'/>
	I0913 23:55:11.622527   25213 main.go:141] libmachine: (ha-817269-m03)       <target dev='hdc' bus='scsi'/>
	I0913 23:55:11.622555   25213 main.go:141] libmachine: (ha-817269-m03)       <readonly/>
	I0913 23:55:11.622564   25213 main.go:141] libmachine: (ha-817269-m03)     </disk>
	I0913 23:55:11.622585   25213 main.go:141] libmachine: (ha-817269-m03)     <disk type='file' device='disk'>
	I0913 23:55:11.622601   25213 main.go:141] libmachine: (ha-817269-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 23:55:11.622617   25213 main.go:141] libmachine: (ha-817269-m03)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/ha-817269-m03.rawdisk'/>
	I0913 23:55:11.622628   25213 main.go:141] libmachine: (ha-817269-m03)       <target dev='hda' bus='virtio'/>
	I0913 23:55:11.622640   25213 main.go:141] libmachine: (ha-817269-m03)     </disk>
	I0913 23:55:11.622650   25213 main.go:141] libmachine: (ha-817269-m03)     <interface type='network'>
	I0913 23:55:11.622662   25213 main.go:141] libmachine: (ha-817269-m03)       <source network='mk-ha-817269'/>
	I0913 23:55:11.622676   25213 main.go:141] libmachine: (ha-817269-m03)       <model type='virtio'/>
	I0913 23:55:11.622686   25213 main.go:141] libmachine: (ha-817269-m03)     </interface>
	I0913 23:55:11.622694   25213 main.go:141] libmachine: (ha-817269-m03)     <interface type='network'>
	I0913 23:55:11.622707   25213 main.go:141] libmachine: (ha-817269-m03)       <source network='default'/>
	I0913 23:55:11.622717   25213 main.go:141] libmachine: (ha-817269-m03)       <model type='virtio'/>
	I0913 23:55:11.622728   25213 main.go:141] libmachine: (ha-817269-m03)     </interface>
	I0913 23:55:11.622738   25213 main.go:141] libmachine: (ha-817269-m03)     <serial type='pty'>
	I0913 23:55:11.622763   25213 main.go:141] libmachine: (ha-817269-m03)       <target port='0'/>
	I0913 23:55:11.622784   25213 main.go:141] libmachine: (ha-817269-m03)     </serial>
	I0913 23:55:11.622797   25213 main.go:141] libmachine: (ha-817269-m03)     <console type='pty'>
	I0913 23:55:11.622808   25213 main.go:141] libmachine: (ha-817269-m03)       <target type='serial' port='0'/>
	I0913 23:55:11.622818   25213 main.go:141] libmachine: (ha-817269-m03)     </console>
	I0913 23:55:11.622827   25213 main.go:141] libmachine: (ha-817269-m03)     <rng model='virtio'>
	I0913 23:55:11.622840   25213 main.go:141] libmachine: (ha-817269-m03)       <backend model='random'>/dev/random</backend>
	I0913 23:55:11.622849   25213 main.go:141] libmachine: (ha-817269-m03)     </rng>
	I0913 23:55:11.622873   25213 main.go:141] libmachine: (ha-817269-m03)     
	I0913 23:55:11.622891   25213 main.go:141] libmachine: (ha-817269-m03)     
	I0913 23:55:11.622905   25213 main.go:141] libmachine: (ha-817269-m03)   </devices>
	I0913 23:55:11.622920   25213 main.go:141] libmachine: (ha-817269-m03) </domain>
	I0913 23:55:11.622934   25213 main.go:141] libmachine: (ha-817269-m03) 
	I0913 23:55:11.629334   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:84:b2:76 in network default
	I0913 23:55:11.630117   25213 main.go:141] libmachine: (ha-817269-m03) Ensuring networks are active...
	I0913 23:55:11.630138   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:11.631149   25213 main.go:141] libmachine: (ha-817269-m03) Ensuring network default is active
	I0913 23:55:11.631540   25213 main.go:141] libmachine: (ha-817269-m03) Ensuring network mk-ha-817269 is active
	I0913 23:55:11.631902   25213 main.go:141] libmachine: (ha-817269-m03) Getting domain xml...
	I0913 23:55:11.632697   25213 main.go:141] libmachine: (ha-817269-m03) Creating domain...
	I0913 23:55:12.885338   25213 main.go:141] libmachine: (ha-817269-m03) Waiting to get IP...
	I0913 23:55:12.886050   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:12.886515   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:12.886576   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:12.886511   25987 retry.go:31] will retry after 211.035695ms: waiting for machine to come up
	I0913 23:55:13.099147   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:13.099717   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:13.099749   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:13.099656   25987 retry.go:31] will retry after 388.168891ms: waiting for machine to come up
	I0913 23:55:13.489393   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:13.489932   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:13.489960   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:13.489868   25987 retry.go:31] will retry after 357.451576ms: waiting for machine to come up
	I0913 23:55:13.849615   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:13.850201   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:13.850231   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:13.850128   25987 retry.go:31] will retry after 521.54606ms: waiting for machine to come up
	I0913 23:55:14.373576   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:14.374080   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:14.374110   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:14.374036   25987 retry.go:31] will retry after 627.057001ms: waiting for machine to come up
	I0913 23:55:15.002951   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:15.003486   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:15.003519   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:15.003440   25987 retry.go:31] will retry after 836.491577ms: waiting for machine to come up
	I0913 23:55:15.842251   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:15.842854   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:15.842973   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:15.842795   25987 retry.go:31] will retry after 722.977468ms: waiting for machine to come up
	I0913 23:55:16.566838   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:16.567174   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:16.567193   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:16.567153   25987 retry.go:31] will retry after 1.232147704s: waiting for machine to come up
	I0913 23:55:17.801545   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:17.802055   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:17.802083   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:17.801996   25987 retry.go:31] will retry after 1.803928933s: waiting for machine to come up
	I0913 23:55:19.607646   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:19.608127   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:19.608163   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:19.608067   25987 retry.go:31] will retry after 1.861415984s: waiting for machine to come up
	I0913 23:55:21.470570   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:21.471074   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:21.471105   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:21.471011   25987 retry.go:31] will retry after 2.818653272s: waiting for machine to come up
	I0913 23:55:24.292810   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:24.293254   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:24.293280   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:24.293213   25987 retry.go:31] will retry after 3.152954921s: waiting for machine to come up
	I0913 23:55:27.448595   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:27.449217   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:27.449240   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:27.449126   25987 retry.go:31] will retry after 3.308883019s: waiting for machine to come up
	I0913 23:55:30.761625   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:30.762119   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find current IP address of domain ha-817269-m03 in network mk-ha-817269
	I0913 23:55:30.762141   25213 main.go:141] libmachine: (ha-817269-m03) DBG | I0913 23:55:30.762080   25987 retry.go:31] will retry after 3.90905092s: waiting for machine to come up
	I0913 23:55:34.675349   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:34.675970   25213 main.go:141] libmachine: (ha-817269-m03) Found IP for machine: 192.168.39.68
	I0913 23:55:34.676002   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has current primary IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:34.676010   25213 main.go:141] libmachine: (ha-817269-m03) Reserving static IP address...
	I0913 23:55:34.676443   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find host DHCP lease matching {name: "ha-817269-m03", mac: "52:54:00:61:13:06", ip: "192.168.39.68"} in network mk-ha-817269
	I0913 23:55:34.769785   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Getting to WaitForSSH function...
	I0913 23:55:34.769817   25213 main.go:141] libmachine: (ha-817269-m03) Reserved static IP address: 192.168.39.68
	I0913 23:55:34.769831   25213 main.go:141] libmachine: (ha-817269-m03) Waiting for SSH to be available...
	I0913 23:55:34.775622   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:34.776439   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269
	I0913 23:55:34.776479   25213 main.go:141] libmachine: (ha-817269-m03) DBG | unable to find defined IP address of network mk-ha-817269 interface with MAC address 52:54:00:61:13:06
	I0913 23:55:34.776708   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH client type: external
	I0913 23:55:34.776735   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa (-rw-------)
	I0913 23:55:34.776825   25213 main.go:141] libmachine: (ha-817269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:55:34.776857   25213 main.go:141] libmachine: (ha-817269-m03) DBG | About to run SSH command:
	I0913 23:55:34.776871   25213 main.go:141] libmachine: (ha-817269-m03) DBG | exit 0
	I0913 23:55:34.781306   25213 main.go:141] libmachine: (ha-817269-m03) DBG | SSH cmd err, output: exit status 255: 
	I0913 23:55:34.781345   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0913 23:55:34.781360   25213 main.go:141] libmachine: (ha-817269-m03) DBG | command : exit 0
	I0913 23:55:34.781372   25213 main.go:141] libmachine: (ha-817269-m03) DBG | err     : exit status 255
	I0913 23:55:34.781384   25213 main.go:141] libmachine: (ha-817269-m03) DBG | output  : 
	I0913 23:55:37.782710   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Getting to WaitForSSH function...
	I0913 23:55:37.785389   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.785839   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:37.785869   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.785948   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH client type: external
	I0913 23:55:37.785986   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa (-rw-------)
	I0913 23:55:37.786020   25213 main.go:141] libmachine: (ha-817269-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 23:55:37.786036   25213 main.go:141] libmachine: (ha-817269-m03) DBG | About to run SSH command:
	I0913 23:55:37.786048   25213 main.go:141] libmachine: (ha-817269-m03) DBG | exit 0
	I0913 23:55:37.915763   25213 main.go:141] libmachine: (ha-817269-m03) DBG | SSH cmd err, output: <nil>: 
	I0913 23:55:37.916036   25213 main.go:141] libmachine: (ha-817269-m03) KVM machine creation complete!
	I0913 23:55:37.916415   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetConfigRaw
	I0913 23:55:37.916905   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:37.917087   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:37.917268   25213 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 23:55:37.917281   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0913 23:55:37.918589   25213 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 23:55:37.918604   25213 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 23:55:37.918612   25213 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 23:55:37.918619   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:37.920683   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.921030   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:37.921057   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:37.921338   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:37.921503   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:37.921654   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:37.921766   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:37.921912   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:37.922154   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:37.922167   25213 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 23:55:38.031557   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:55:38.031586   25213 main.go:141] libmachine: Detecting the provisioner...
	I0913 23:55:38.031596   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.036277   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.036740   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.036769   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.037074   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.037301   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.037606   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.037796   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.038049   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.038206   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.038216   25213 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 23:55:38.148130   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 23:55:38.148201   25213 main.go:141] libmachine: found compatible host: buildroot
	I0913 23:55:38.148214   25213 main.go:141] libmachine: Provisioning with buildroot...
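The provisioner detection above keys off the ID field returned by cat /etc/os-release. The following is a minimal Go sketch of that kind of check; the helper name and the single-entry "buildroot" mapping are illustrative only, not minikube's actual implementation.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner scans /etc/os-release output for an ID field and maps it
// to a provisioner name. Hypothetical helper; only "buildroot" is handled here.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			if strings.Trim(v, `"`) == "buildroot" {
				return "buildroot"
			}
		}
	}
	return "unknown"
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println("found compatible host:", detectProvisioner(out))
}
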
	I0913 23:55:38.148223   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:38.148465   25213 buildroot.go:166] provisioning hostname "ha-817269-m03"
	I0913 23:55:38.148501   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:38.148678   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.151235   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.151575   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.151600   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.151727   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.151899   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.152076   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.152210   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.152370   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.152575   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.152586   25213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269-m03 && echo "ha-817269-m03" | sudo tee /etc/hostname
	I0913 23:55:38.278480   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269-m03
	
	I0913 23:55:38.278510   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.281122   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.281471   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.281511   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.281738   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.281907   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.282050   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.282161   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.282293   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.282451   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.282467   25213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:55:38.400641   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 23:55:38.400677   25213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0913 23:55:38.400699   25213 buildroot.go:174] setting up certificates
	I0913 23:55:38.400709   25213 provision.go:84] configureAuth start
	I0913 23:55:38.400721   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetMachineName
	I0913 23:55:38.401032   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:38.403609   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.403981   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.404002   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.404189   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.406061   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.406400   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.406442   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.406562   25213 provision.go:143] copyHostCerts
	I0913 23:55:38.406592   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:55:38.406633   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0913 23:55:38.406646   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0913 23:55:38.406730   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0913 23:55:38.406838   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:55:38.406871   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0913 23:55:38.406880   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0913 23:55:38.406922   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0913 23:55:38.407004   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:55:38.407029   25213 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0913 23:55:38.407038   25213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0913 23:55:38.407076   25213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0913 23:55:38.407157   25213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269-m03 san=[127.0.0.1 192.168.39.68 ha-817269-m03 localhost minikube]
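The server certificate above is issued with both IP and DNS SANs (127.0.0.1, 192.168.39.68, ha-817269-m03, localhost, minikube) so the node can be reached under any of those names. Below is a minimal Go sketch of building such a certificate with crypto/x509; it self-signs for brevity, whereas the flow in the log signs with the cluster CA key pair.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Fresh key for the server certificate (illustrative size).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN entries mirror the san=[...] list logged above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-817269-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
		DNSNames:     []string{"ha-817269-m03", "localhost", "minikube"},
	}
	// Self-signed here; the real flow passes the CA certificate and CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER certificate\n", len(der))
}
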
	I0913 23:55:38.545052   25213 provision.go:177] copyRemoteCerts
	I0913 23:55:38.545118   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:55:38.545149   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.548022   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.548345   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.548374   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.548530   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.548691   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.548816   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.548921   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:38.634530   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0913 23:55:38.634612   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:55:38.658715   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0913 23:55:38.658796   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:55:38.683540   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0913 23:55:38.683602   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 23:55:38.710001   25213 provision.go:87] duration metric: took 309.277958ms to configureAuth
	I0913 23:55:38.710030   25213 buildroot.go:189] setting minikube options for container-runtime
	I0913 23:55:38.710267   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:55:38.710353   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.713112   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.713542   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.713571   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.713691   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.713871   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.714037   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.714151   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.714301   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:38.714452   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:38.714464   25213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0913 23:55:38.934725   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0913 23:55:38.934751   25213 main.go:141] libmachine: Checking connection to Docker...
	I0913 23:55:38.934759   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetURL
	I0913 23:55:38.936292   25213 main.go:141] libmachine: (ha-817269-m03) DBG | Using libvirt version 6000000
	I0913 23:55:38.938608   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.938961   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.938987   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.939166   25213 main.go:141] libmachine: Docker is up and running!
	I0913 23:55:38.939186   25213 main.go:141] libmachine: Reticulating splines...
	I0913 23:55:38.939193   25213 client.go:171] duration metric: took 27.718607432s to LocalClient.Create
	I0913 23:55:38.939218   25213 start.go:167] duration metric: took 27.718669613s to libmachine.API.Create "ha-817269"
	I0913 23:55:38.939231   25213 start.go:293] postStartSetup for "ha-817269-m03" (driver="kvm2")
	I0913 23:55:38.939243   25213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:55:38.939265   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:38.939552   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:55:38.939572   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:38.941660   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.942028   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:38.942051   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:38.942268   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:38.942449   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:38.942604   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:38.942708   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:39.027301   25213 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:55:39.031737   25213 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 23:55:39.031768   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0913 23:55:39.031854   25213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0913 23:55:39.031944   25213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0913 23:55:39.031958   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0913 23:55:39.032065   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 23:55:39.041881   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:55:39.066826   25213 start.go:296] duration metric: took 127.580682ms for postStartSetup
	I0913 23:55:39.066888   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetConfigRaw
	I0913 23:55:39.067543   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:39.070333   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.070878   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.070918   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.071273   25213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0913 23:55:39.071507   25213 start.go:128] duration metric: took 27.869178264s to createHost
	I0913 23:55:39.071535   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:39.073969   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.074394   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.074421   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.074589   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:39.074788   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.074927   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.075046   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:39.075189   25213 main.go:141] libmachine: Using SSH client type: native
	I0913 23:55:39.075409   25213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0913 23:55:39.075424   25213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 23:55:39.184310   25213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726271739.166205196
	
	I0913 23:55:39.184335   25213 fix.go:216] guest clock: 1726271739.166205196
	I0913 23:55:39.184343   25213 fix.go:229] Guest: 2024-09-13 23:55:39.166205196 +0000 UTC Remote: 2024-09-13 23:55:39.07151977 +0000 UTC m=+148.114736673 (delta=94.685426ms)
	I0913 23:55:39.184358   25213 fix.go:200] guest clock delta is within tolerance: 94.685426ms
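The guest clock check above runs date +%s.%N on the new node and compares the result with the host clock, accepting the machine only while the delta stays within a small tolerance. A short Go sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's exact constant.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns guest minus host.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above (remote timestamp vs. guest output).
	delta, err := clockDelta("1726271739.166205196", time.Unix(1726271739, 71519770))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
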
	I0913 23:55:39.184365   25213 start.go:83] releasing machines lock for "ha-817269-m03", held for 27.982177413s
	I0913 23:55:39.184388   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.184673   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:39.187546   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.187968   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.187993   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.190368   25213 out.go:177] * Found network options:
	I0913 23:55:39.191781   25213 out.go:177]   - NO_PROXY=192.168.39.132,192.168.39.6
	W0913 23:55:39.192966   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 23:55:39.192994   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:55:39.193015   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.193603   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.193787   25213 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0913 23:55:39.193862   25213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:55:39.193908   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	W0913 23:55:39.193976   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	W0913 23:55:39.194010   25213 proxy.go:119] fail to check proxy env: Error ip not in block
	I0913 23:55:39.194083   25213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0913 23:55:39.194104   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0913 23:55:39.196854   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197126   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197332   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.197364   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197535   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:39.197593   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:39.197617   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:39.197693   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.197770   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0913 23:55:39.197835   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:39.197901   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0913 23:55:39.197994   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0913 23:55:39.197994   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:39.198151   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0913 23:55:39.437719   25213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 23:55:39.443613   25213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:55:39.443689   25213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:55:39.459332   25213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 23:55:39.459363   25213 start.go:495] detecting cgroup driver to use...
	I0913 23:55:39.459460   25213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 23:55:39.476630   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 23:55:39.490488   25213 docker.go:217] disabling cri-docker service (if available) ...
	I0913 23:55:39.490557   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0913 23:55:39.504494   25213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0913 23:55:39.517473   25213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0913 23:55:39.626063   25213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0913 23:55:39.780927   25213 docker.go:233] disabling docker service ...
	I0913 23:55:39.781009   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0913 23:55:39.796182   25213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0913 23:55:39.811125   25213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0913 23:55:39.942539   25213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0913 23:55:40.073069   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0913 23:55:40.088262   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:55:40.106653   25213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0913 23:55:40.106723   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.116597   25213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0913 23:55:40.116661   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.126249   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.136027   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.147405   25213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:55:40.158939   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.170015   25213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.186803   25213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0913 23:55:40.196896   25213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:55:40.205832   25213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0913 23:55:40.205891   25213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0913 23:55:40.218759   25213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
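Because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, the failed sysctl above is treated as a hint to run modprobe and then enable IPv4 forwarding. The helper below is a hypothetical Go sketch mirroring that fallback; the command strings match the log, but the function itself is not minikube code.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge netfilter sysctl, loads br_netfilter
// if the probe fails, and then enables IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl is missing until the module is loaded; try loading it.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
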
	I0913 23:55:40.227617   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:55:40.355751   25213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0913 23:55:40.454384   25213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0913 23:55:40.454455   25213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0913 23:55:40.459832   25213 start.go:563] Will wait 60s for crictl version
	I0913 23:55:40.459907   25213 ssh_runner.go:195] Run: which crictl
	I0913 23:55:40.463809   25213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:55:40.503536   25213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0913 23:55:40.503626   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:55:40.530448   25213 ssh_runner.go:195] Run: crio --version
	I0913 23:55:40.559290   25213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0913 23:55:40.560767   25213 out.go:177]   - env NO_PROXY=192.168.39.132
	I0913 23:55:40.562083   25213 out.go:177]   - env NO_PROXY=192.168.39.132,192.168.39.6
	I0913 23:55:40.563716   25213 main.go:141] libmachine: (ha-817269-m03) Calling .GetIP
	I0913 23:55:40.566613   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:40.566935   25213 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0913 23:55:40.566960   25213 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0913 23:55:40.567188   25213 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 23:55:40.571410   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:55:40.583511   25213 mustload.go:65] Loading cluster: ha-817269
	I0913 23:55:40.583744   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:55:40.584024   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:40.584063   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:40.600039   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I0913 23:55:40.600465   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:40.600930   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:40.600952   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:40.601284   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:40.601492   25213 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0913 23:55:40.603219   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:55:40.603501   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:40.603556   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:40.618991   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0913 23:55:40.619430   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:40.620021   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:40.620043   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:40.620349   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:40.620505   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:55:40.620651   25213 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.68
	I0913 23:55:40.620661   25213 certs.go:194] generating shared ca certs ...
	I0913 23:55:40.620674   25213 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:55:40.620787   25213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0913 23:55:40.620825   25213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0913 23:55:40.620834   25213 certs.go:256] generating profile certs ...
	I0913 23:55:40.620900   25213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0913 23:55:40.620923   25213 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c
	I0913 23:55:40.620937   25213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.68 192.168.39.254]
	I0913 23:55:40.830651   25213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c ...
	I0913 23:55:40.830684   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c: {Name:mk8d9024110bfeb203b6e91f0e321306ad905077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:55:40.830883   25213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c ...
	I0913 23:55:40.830902   25213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c: {Name:mk34f5bcfc1f2ed41966070859698727dcacea18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:55:40.831174   25213 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.2d3a034c -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0913 23:55:40.831382   25213 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.2d3a034c -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0913 23:55:40.831584   25213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
	I0913 23:55:40.831601   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0913 23:55:40.831614   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0913 23:55:40.831624   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:55:40.831642   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:55:40.831656   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0913 23:55:40.831675   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0913 23:55:40.831748   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0913 23:55:40.843975   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0913 23:55:40.844071   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0913 23:55:40.844125   25213 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0913 23:55:40.844140   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:55:40.844169   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:55:40.844205   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:55:40.844234   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0913 23:55:40.844289   25213 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0913 23:55:40.844327   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0913 23:55:40.844348   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0913 23:55:40.844365   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:40.844412   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:55:40.847079   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:40.847635   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:55:40.847664   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:40.847873   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:55:40.848067   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:55:40.848231   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:55:40.848393   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:55:40.924186   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0913 23:55:40.929403   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0913 23:55:40.941602   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0913 23:55:40.946504   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0913 23:55:40.961116   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0913 23:55:40.967162   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0913 23:55:40.979653   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0913 23:55:40.984703   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0913 23:55:40.999184   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0913 23:55:41.009470   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0913 23:55:41.023915   25213 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0913 23:55:41.029256   25213 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0913 23:55:41.041387   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:55:41.066439   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:55:41.093525   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:55:41.120996   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:55:41.144983   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0913 23:55:41.168361   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 23:55:41.196357   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:55:41.219491   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:55:41.241960   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0913 23:55:41.265413   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0913 23:55:41.289840   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:55:41.315154   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0913 23:55:41.331886   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0913 23:55:41.350688   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0913 23:55:41.369557   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0913 23:55:41.386519   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0913 23:55:41.402677   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0913 23:55:41.421240   25213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0913 23:55:41.439499   25213 ssh_runner.go:195] Run: openssl version
	I0913 23:55:41.445412   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0913 23:55:41.456446   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0913 23:55:41.461070   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0913 23:55:41.461133   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0913 23:55:41.467115   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0913 23:55:41.478381   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0913 23:55:41.489389   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0913 23:55:41.494212   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0913 23:55:41.494273   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0913 23:55:41.499662   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 23:55:41.510112   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:55:41.520338   25213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:41.524729   25213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:41.524790   25213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:55:41.529996   25213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
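The commands above show the pattern minikube uses to install each CA on the node: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back to it so TLS clients that scan the hashed certificate directory can find it. A minimal Go sketch of that pattern (not minikube's own code), assuming openssl is on PATH and the process may write to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // ensureCACert mirrors the log above: hash the PEM with
    // `openssl x509 -hash -noout -in <pem>` and link /etc/ssl/certs/<hash>.0
    // to it so OpenSSL-based clients pick the CA up.
    func ensureCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return nil // already linked
    	}
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := ensureCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

Run against /usr/share/ca-certificates/minikubeCA.pem this would recreate the b5213941.0 link seen in the log.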
	I0913 23:55:41.540659   25213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:55:41.544673   25213 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:55:41.544722   25213 kubeadm.go:934] updating node {m03 192.168.39.68 8443 v1.31.1 crio true true} ...
	I0913 23:55:41.544802   25213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:55:41.544835   25213 kube-vip.go:115] generating kube-vip config ...
	I0913 23:55:41.544873   25213 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0913 23:55:41.562996   25213 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0913 23:55:41.563080   25213 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
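With cp_enable and lb_enable set, this static pod makes the virtual IP 192.168.39.254 answer on port 8443 and front the control-plane nodes. A quick way to confirm the VIP is alive is to hit the API server's /readyz endpoint through it; the Go sketch below assumes the default anonymous access to /readyz is still enabled and skips TLS verification purely to stay self-contained:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Probe the kube-vip virtual IP from the log (192.168.39.254:8443).
    	// TLS verification is skipped only to keep the sketch self-contained;
    	// a real check would trust the cluster CA instead.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.254:8443/readyz")
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("VIP answered %s: %s\n", resp.Status, body)
    }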
	I0913 23:55:41.563143   25213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:55:41.573436   25213 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 23:55:41.573508   25213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 23:55:41.582907   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 23:55:41.582953   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 23:55:41.582978   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:55:41.582956   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:55:41.583044   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:55:41.582997   25213 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 23:55:41.583086   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:55:41.583149   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:55:41.587398   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0913 23:55:41.587427   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 23:55:41.626330   25213 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:55:41.626331   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0913 23:55:41.626404   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 23:55:41.626448   25213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:55:41.662177   25213 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0913 23:55:41.662209   25213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
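The ?checksum=file: query on the dl.k8s.io URLs above tells minikube's downloader to verify each binary against the published .sha256 file before handing it to scp. The sketch below reproduces that check with only the Go standard library; the /tmp/kubelet destination is just an illustrative path:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url into path and returns the SHA-256 of what was written.
    func fetch(url, path string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	f, err := os.Create(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
    	got, err := fetch(base, "/tmp/kubelet")
    	if err != nil {
    		panic(err)
    	}
    	resp, err := http.Get(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	published, _ := io.ReadAll(resp.Body)
    	fields := strings.Fields(string(published)) // the .sha256 file carries the hex digest
    	if len(fields) == 0 || got != fields[0] {
    		fmt.Println("checksum mismatch, refusing to install")
    		os.Exit(1)
    	}
    	fmt.Println("kubelet checksum verified:", got)
    }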
	I0913 23:55:42.532205   25213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0913 23:55:42.542442   25213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:55:42.565043   25213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:55:42.583282   25213 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0913 23:55:42.600855   25213 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0913 23:55:42.606296   25213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
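The one-liner above keeps the control-plane.minikube.internal entry idempotent: it strips any stale mapping and re-appends the VIP address. The same update in Go, assuming write access to /etc/hosts (the 192.168.39.254 address comes from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any stale mapping for the control-plane name, keep everything else.
    		if strings.HasSuffix(strings.TrimSpace(line), "control-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("hosts entry ensured:", entry)
    }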
	I0913 23:55:42.620005   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:55:42.757672   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:55:42.780453   25213 host.go:66] Checking if "ha-817269" exists ...
	I0913 23:55:42.780941   25213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:55:42.780995   25213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:55:42.796895   25213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0913 23:55:42.797413   25213 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:55:42.797966   25213 main.go:141] libmachine: Using API Version  1
	I0913 23:55:42.797992   25213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:55:42.798351   25213 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:55:42.798658   25213 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0913 23:55:42.798828   25213 start.go:317] joinCluster: &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:55:42.798981   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0913 23:55:42.798997   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0913 23:55:42.802191   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:42.802836   25213 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0913 23:55:42.802876   25213 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0913 23:55:42.803187   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0913 23:55:42.803393   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0913 23:55:42.803545   25213 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0913 23:55:42.803740   25213 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0913 23:55:42.971542   25213 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:55:42.971616   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gwzhzn.g3aaqj2b0yiq46n6 --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m03 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0913 23:56:05.142372   25213 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gwzhzn.g3aaqj2b0yiq46n6 --discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-817269-m03 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (22.170717167s)
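The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's Subject Public Key Info, which lets the joining node authenticate the control plane before trusting the served cluster-info. The Go sketch below recomputes it from ca.crt; the /var/lib/minikube/certs/ca.crt path is assumed to match the other certificate paths in this log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Recompute the discovery hash: SHA-256 over the CA cert's
    	// Subject Public Key Info, printed in kubeadm's "sha256:<hex>" form.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }

Comparing its output against the hash in the join command is a quick way to double-check a token create/join pair by hand.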
	I0913 23:56:05.142458   25213 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0913 23:56:05.674909   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-817269-m03 minikube.k8s.io/updated_at=2024_09_13T23_56_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=ha-817269 minikube.k8s.io/primary=false
	I0913 23:56:05.801046   25213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-817269-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0913 23:56:05.913224   25213 start.go:319] duration metric: took 23.11439217s to joinCluster
	I0913 23:56:05.913327   25213 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0913 23:56:05.913665   25213 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:56:05.915500   25213 out.go:177] * Verifying Kubernetes components...
	I0913 23:56:05.917249   25213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:56:06.263931   25213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:56:06.296340   25213 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:56:06.296627   25213 kapi.go:59] client config for ha-817269: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0913 23:56:06.296685   25213 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
	I0913 23:56:06.296925   25213 node_ready.go:35] waiting up to 6m0s for node "ha-817269-m03" to be "Ready" ...
	I0913 23:56:06.297004   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:06.297015   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:06.297026   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:06.297037   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:06.302158   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:06.797370   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:06.797406   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:06.797416   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:06.797421   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:06.801509   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:07.297993   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:07.298018   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:07.298028   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:07.298034   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:07.305273   25213 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 23:56:07.797521   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:07.797550   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:07.797562   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:07.797623   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:07.801648   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:08.297396   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:08.297416   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:08.297427   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:08.297432   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:08.301194   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:08.301681   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:08.798082   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:08.798104   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:08.798113   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:08.798116   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:08.801950   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:09.297895   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:09.297918   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:09.297928   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:09.297935   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:09.301967   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:09.797541   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:09.797564   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:09.797585   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:09.797591   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:09.801453   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:10.297951   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:10.298002   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:10.298015   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:10.298021   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:10.301801   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:10.302565   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:10.797472   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:10.797498   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:10.797509   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:10.797516   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:10.804790   25213 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0913 23:56:11.298129   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:11.298156   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:11.298168   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:11.298173   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:11.304102   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:11.797183   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:11.797210   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:11.797222   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:11.797229   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:11.800566   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:12.297496   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:12.297520   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:12.297543   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:12.297550   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:12.303217   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:12.303762   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:12.798011   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:12.798033   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:12.798042   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:12.798046   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:12.801544   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:13.297811   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:13.297840   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:13.297851   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:13.297856   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:13.301925   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:13.797462   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:13.797487   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:13.797497   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:13.797504   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:13.803000   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:14.297500   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:14.297524   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:14.297533   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:14.297540   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:14.300969   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:14.797844   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:14.797866   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:14.797874   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:14.797878   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:14.801296   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:14.801808   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:15.298067   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:15.298092   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:15.298103   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:15.298108   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:15.302364   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:15.798086   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:15.798110   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:15.798121   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:15.798128   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:15.801671   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:16.297899   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:16.297923   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:16.297933   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:16.297941   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:16.301733   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:16.797903   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:16.797924   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:16.797930   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:16.797934   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:16.801188   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:16.801889   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:17.297931   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:17.297958   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:17.297969   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:17.297974   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:17.301670   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:17.797324   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:17.797344   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:17.797352   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:17.797356   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:17.800407   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:18.297761   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:18.297783   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:18.297791   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:18.297795   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:18.301408   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:18.797218   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:18.797241   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:18.797251   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:18.797256   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:18.800549   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:19.297907   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:19.297943   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:19.297955   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:19.297961   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:19.302619   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:19.303156   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:19.797514   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:19.797545   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:19.797554   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:19.797559   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:19.801908   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:20.297956   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:20.297980   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:20.297988   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:20.297992   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:20.301530   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:20.797290   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:20.797315   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:20.797323   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:20.797329   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:20.800694   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:21.297897   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:21.297922   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:21.297932   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:21.297937   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:21.301206   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:21.797072   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:21.797093   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:21.797100   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:21.797104   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:21.800800   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:21.801249   25213 node_ready.go:53] node "ha-817269-m03" has status "Ready":"False"
	I0913 23:56:22.297565   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:22.297594   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:22.297605   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:22.297612   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:22.300844   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:22.797793   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:22.797813   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:22.797821   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:22.797825   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:22.801592   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.298061   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:23.298089   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.298097   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.298101   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.301909   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.797760   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:23.797785   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.797795   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.797812   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.801140   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.801695   25213 node_ready.go:49] node "ha-817269-m03" has status "Ready":"True"
	I0913 23:56:23.801716   25213 node_ready.go:38] duration metric: took 17.504775301s for node "ha-817269-m03" to be "Ready" ...
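The 17.5s wait above is a plain polling loop: roughly every 500ms the driver issues GET /api/v1/nodes/ha-817269-m03 and checks whether the Ready condition has become True. A stripped-down equivalent using only the standard library (client certificate and CA paths taken from the kapi.go line above; a real client would use client-go instead):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    type node struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	// Paths and endpoint taken from the log; adjust for a different profile.
    	cert, err := tls.LoadX509KeyPair(
    		"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.crt",
    		"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03")
    		if err == nil {
    			var n node
    			if json.NewDecoder(resp.Body).Decode(&n) == nil {
    				for _, c := range n.Status.Conditions {
    					if c.Type == "Ready" && c.Status == "True" {
    						resp.Body.Close()
    						fmt.Println("node is Ready")
    						return
    					}
    				}
    			}
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }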
	I0913 23:56:23.801723   25213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:56:23.801842   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:23.801857   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.801867   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.801873   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.807883   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:23.813808   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.813882   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mwpbw
	I0913 23:56:23.813891   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.813898   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.813902   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.816951   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:23.817512   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:23.817528   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.817535   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.817539   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.820127   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.820758   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.820776   25213 pod_ready.go:82] duration metric: took 6.945529ms for pod "coredns-7c65d6cfc9-mwpbw" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.820785   25213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.820833   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rq5pv
	I0913 23:56:23.820840   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.820847   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.820854   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.823433   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.824054   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:23.824067   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.824074   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.824078   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.826288   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.826781   25213 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.826795   25213 pod_ready.go:82] duration metric: took 6.004504ms for pod "coredns-7c65d6cfc9-rq5pv" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.826803   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.826849   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269
	I0913 23:56:23.826856   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.826862   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.826866   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.829007   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.829506   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:23.829518   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.829524   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.829528   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.831794   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.832529   25213 pod_ready.go:93] pod "etcd-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.832547   25213 pod_ready.go:82] duration metric: took 5.737477ms for pod "etcd-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.832558   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.832617   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m02
	I0913 23:56:23.832627   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.832636   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.832643   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.835171   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.835846   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:23.835861   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.835870   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.835877   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:23.838476   25213 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0913 23:56:23.839058   25213 pod_ready.go:93] pod "etcd-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:23.839074   25213 pod_ready.go:82] duration metric: took 6.509005ms for pod "etcd-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.839082   25213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:23.998534   25213 request.go:632] Waited for 159.393284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m03
	I0913 23:56:23.998602   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-817269-m03
	I0913 23:56:23.998610   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:23.998621   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:23.998684   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.002242   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.198547   25213 request.go:632] Waited for 195.406667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:24.198647   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:24.198656   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.198668   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.198678   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.202087   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.202576   25213 pod_ready.go:93] pod "etcd-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:24.202594   25213 pod_ready.go:82] duration metric: took 363.505982ms for pod "etcd-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
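The "Waited for ... due to client-side throttling" messages are the Kubernetes client's own token-bucket limiter delaying requests locally, not the API server rejecting them; with QPS and Burst left at 0 in the rest.Config shown earlier, the client falls back to its defaults (commonly 5 requests/s with a burst of 10). The sketch below shows the same local pacing with golang.org/x/time/rate; the 5/10 figures are those conventional defaults, not values read from this log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Requests beyond the burst are delayed locally, which is what the
    	// "Waited for ... due to client-side throttling" lines report.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)
    	start := time.Now()
    	for i := 0; i < 15; i++ {
    		if err := limiter.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		fmt.Printf("request %2d released after %v\n", i+1, time.Since(start).Round(time.Millisecond))
    	}
    }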
	I0913 23:56:24.202618   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.398825   25213 request.go:632] Waited for 196.129506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:56:24.398907   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269
	I0913 23:56:24.398918   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.398928   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.398936   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.402563   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.598546   25213 request.go:632] Waited for 195.348374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:24.598598   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:24.598602   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.598612   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.598619   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.601840   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:24.602399   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:24.602425   25213 pod_ready.go:82] duration metric: took 399.795885ms for pod "kube-apiserver-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.602439   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:24.798583   25213 request.go:632] Waited for 196.054862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:56:24.798653   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m02
	I0913 23:56:24.798662   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.798673   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.798683   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:24.802927   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:24.998623   25213 request.go:632] Waited for 194.658729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:24.998687   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:24.998694   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:24.998705   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:24.998710   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.002493   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.003098   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:25.003126   25213 pod_ready.go:82] duration metric: took 400.679484ms for pod "kube-apiserver-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.003137   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.198324   25213 request.go:632] Waited for 195.110224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m03
	I0913 23:56:25.198399   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-817269-m03
	I0913 23:56:25.198405   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.198413   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.198420   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.202304   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.398755   25213 request.go:632] Waited for 195.370574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:25.398822   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:25.398843   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.398852   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.398859   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.403809   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:25.404315   25213 pod_ready.go:93] pod "kube-apiserver-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:25.404346   25213 pod_ready.go:82] duration metric: took 401.203093ms for pod "kube-apiserver-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.404360   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.598435   25213 request.go:632] Waited for 193.996636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:56:25.598511   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269
	I0913 23:56:25.598518   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.598528   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.598537   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.602490   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.798212   25213 request.go:632] Waited for 194.91139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:25.798292   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:25.798299   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.798316   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.798325   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:25.802071   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:25.802525   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:25.802542   25213 pod_ready.go:82] duration metric: took 398.175427ms for pod "kube-controller-manager-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.802552   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:25.997820   25213 request.go:632] Waited for 195.20112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:56:25.997900   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m02
	I0913 23:56:25.997912   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:25.997923   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:25.997929   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.002077   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:26.198089   25213 request.go:632] Waited for 195.190135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.198184   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.198196   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.198221   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.198226   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.201626   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:26.202138   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:26.202158   25213 pod_ready.go:82] duration metric: took 399.597741ms for pod "kube-controller-manager-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.202169   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.398681   25213 request.go:632] Waited for 196.449711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m03
	I0913 23:56:26.398743   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-817269-m03
	I0913 23:56:26.398750   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.398759   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.398769   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.402887   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:26.598749   25213 request.go:632] Waited for 195.194054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:26.598809   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:26.598813   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.598820   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.598825   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.602277   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:26.602742   25213 pod_ready.go:93] pod "kube-controller-manager-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:26.602760   25213 pod_ready.go:82] duration metric: took 400.584781ms for pod "kube-controller-manager-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.602777   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:26.797953   25213 request.go:632] Waited for 195.085414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:56:26.798138   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7t9b2
	I0913 23:56:26.798156   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.798167   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.798175   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:26.874051   25213 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0913 23:56:26.998492   25213 request.go:632] Waited for 123.27371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.998588   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:26.998598   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:26.998605   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:26.998608   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.002582   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.003238   25213 pod_ready.go:93] pod "kube-proxy-7t9b2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:27.003259   25213 pod_ready.go:82] duration metric: took 400.472179ms for pod "kube-proxy-7t9b2" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.003269   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwr6g" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.198305   25213 request.go:632] Waited for 194.97488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bwr6g
	I0913 23:56:27.198364   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bwr6g
	I0913 23:56:27.198381   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.198391   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.198396   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.201758   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.398783   25213 request.go:632] Waited for 196.370557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:27.398856   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:27.398863   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.398870   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.398873   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.402245   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.402848   25213 pod_ready.go:93] pod "kube-proxy-bwr6g" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:27.402873   25213 pod_ready.go:82] duration metric: took 399.594924ms for pod "kube-proxy-bwr6g" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.402887   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.597861   25213 request.go:632] Waited for 194.878811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:56:27.597933   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p9lkl
	I0913 23:56:27.597941   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.597950   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.597959   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.601252   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.797936   25213 request.go:632] Waited for 196.027185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:27.798005   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:27.798011   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.798021   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:27.798027   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.801636   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:27.802301   25213 pod_ready.go:93] pod "kube-proxy-p9lkl" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:27.802323   25213 pod_ready.go:82] duration metric: took 399.427432ms for pod "kube-proxy-p9lkl" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.802335   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:27.998667   25213 request.go:632] Waited for 196.261463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:56:27.998757   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269
	I0913 23:56:27.998765   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:27.998780   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:27.998789   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.002402   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:28.198469   25213 request.go:632] Waited for 195.365117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:28.198543   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269
	I0913 23:56:28.198548   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.198567   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.198575   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.201614   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:28.202191   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:28.202209   25213 pod_ready.go:82] duration metric: took 399.86721ms for pod "kube-scheduler-ha-817269" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.202219   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.398302   25213 request.go:632] Waited for 196.02284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:56:28.398364   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m02
	I0913 23:56:28.398374   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.398383   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.398400   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.401753   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:28.598735   25213 request.go:632] Waited for 196.352003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:28.598804   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m02
	I0913 23:56:28.598809   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.598816   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.598820   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.603244   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:28.603711   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:28.603735   25213 pod_ready.go:82] duration metric: took 401.50969ms for pod "kube-scheduler-ha-817269-m02" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.603747   25213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:28.797891   25213 request.go:632] Waited for 194.053174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m03
	I0913 23:56:28.797948   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-817269-m03
	I0913 23:56:28.797954   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.797961   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.797964   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:28.802684   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:28.998666   25213 request.go:632] Waited for 195.361149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:28.998746   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-817269-m03
	I0913 23:56:28.998755   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:28.998763   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:28.998767   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.002267   25213 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0913 23:56:29.002892   25213 pod_ready.go:93] pod "kube-scheduler-ha-817269-m03" in "kube-system" namespace has status "Ready":"True"
	I0913 23:56:29.002908   25213 pod_ready.go:82] duration metric: took 399.155646ms for pod "kube-scheduler-ha-817269-m03" in "kube-system" namespace to be "Ready" ...
	I0913 23:56:29.002919   25213 pod_ready.go:39] duration metric: took 5.20118564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:56:29.002932   25213 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:56:29.002982   25213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:56:29.019020   25213 api_server.go:72] duration metric: took 23.105654077s to wait for apiserver process to appear ...
	I0913 23:56:29.019048   25213 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:56:29.019071   25213 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0913 23:56:29.023793   25213 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
	I0913 23:56:29.023865   25213 round_trippers.go:463] GET https://192.168.39.132:8443/version
	I0913 23:56:29.023871   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.023878   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.023886   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.024911   25213 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0913 23:56:29.024992   25213 api_server.go:141] control plane version: v1.31.1
	I0913 23:56:29.025004   25213 api_server.go:131] duration metric: took 5.949292ms to wait for apiserver health ...
	I0913 23:56:29.025017   25213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:56:29.198483   25213 request.go:632] Waited for 173.392668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.198563   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.198569   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.198577   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.198581   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.204562   25213 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0913 23:56:29.212258   25213 system_pods.go:59] 24 kube-system pods found
	I0913 23:56:29.212292   25213 system_pods.go:61] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:56:29.212297   25213 system_pods.go:61] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:56:29.212301   25213 system_pods.go:61] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:56:29.212305   25213 system_pods.go:61] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:56:29.212313   25213 system_pods.go:61] "etcd-ha-817269-m03" [d9e93af2-0a01-46eb-8ccd-09b9f3bb8976] Running
	I0913 23:56:29.212317   25213 system_pods.go:61] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:56:29.212320   25213 system_pods.go:61] "kindnet-np2s8" [97c0d537-4460-47f7-8248-1e9445ac27bd] Running
	I0913 23:56:29.212323   25213 system_pods.go:61] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:56:29.212326   25213 system_pods.go:61] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:56:29.212330   25213 system_pods.go:61] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:56:29.212333   25213 system_pods.go:61] "kube-apiserver-ha-817269-m03" [58c8463c-880c-4e4a-b4f8-1460801fab06] Running
	I0913 23:56:29.212337   25213 system_pods.go:61] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:56:29.212340   25213 system_pods.go:61] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:56:29.212345   25213 system_pods.go:61] "kube-controller-manager-ha-817269-m03" [aa8cf8e9-cafe-46cc-aa22-3c188fd160fc] Running
	I0913 23:56:29.212350   25213 system_pods.go:61] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:56:29.212354   25213 system_pods.go:61] "kube-proxy-bwr6g" [256835a2-a848-4572-9e9f-e99350c07ed2] Running
	I0913 23:56:29.212358   25213 system_pods.go:61] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:56:29.212363   25213 system_pods.go:61] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:56:29.212368   25213 system_pods.go:61] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:56:29.212373   25213 system_pods.go:61] "kube-scheduler-ha-817269-m03" [2dd97d6a-9b14-41e2-bf07-628073272e6d] Running
	I0913 23:56:29.212381   25213 system_pods.go:61] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:56:29.212387   25213 system_pods.go:61] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:56:29.212395   25213 system_pods.go:61] "kube-vip-ha-817269-m03" [e50f8baf-d5d0-4534-b1ce-eb76b23764f7] Running
	I0913 23:56:29.212401   25213 system_pods.go:61] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:56:29.212408   25213 system_pods.go:74] duration metric: took 187.384291ms to wait for pod list to return data ...
	I0913 23:56:29.212419   25213 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:56:29.397869   25213 request.go:632] Waited for 185.3661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:56:29.397927   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0913 23:56:29.397932   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.397939   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.397944   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.402445   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:29.402564   25213 default_sa.go:45] found service account: "default"
	I0913 23:56:29.402580   25213 default_sa.go:55] duration metric: took 190.156097ms for default service account to be created ...
	I0913 23:56:29.402589   25213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:56:29.597876   25213 request.go:632] Waited for 195.226759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.597941   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0913 23:56:29.597949   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.597959   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.597965   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.604837   25213 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0913 23:56:29.610983   25213 system_pods.go:86] 24 kube-system pods found
	I0913 23:56:29.611013   25213 system_pods.go:89] "coredns-7c65d6cfc9-mwpbw" [e19eb0be-8e26-4e88-824e-aaec9360bf6c] Running
	I0913 23:56:29.611019   25213 system_pods.go:89] "coredns-7c65d6cfc9-rq5pv" [34cd12c1-d279-4067-a290-be3af39ddf20] Running
	I0913 23:56:29.611023   25213 system_pods.go:89] "etcd-ha-817269" [177764d9-c35b-4c76-916c-3e0f05f2913f] Running
	I0913 23:56:29.611027   25213 system_pods.go:89] "etcd-ha-817269-m02" [5713830a-1aa6-4972-bdb1-2fa0037e5daf] Running
	I0913 23:56:29.611031   25213 system_pods.go:89] "etcd-ha-817269-m03" [d9e93af2-0a01-46eb-8ccd-09b9f3bb8976] Running
	I0913 23:56:29.611035   25213 system_pods.go:89] "kindnet-dxj2g" [5dd2f191-9de6-498e-9d86-7a355340f4a6] Running
	I0913 23:56:29.611038   25213 system_pods.go:89] "kindnet-np2s8" [97c0d537-4460-47f7-8248-1e9445ac27bd] Running
	I0913 23:56:29.611042   25213 system_pods.go:89] "kindnet-qcfqk" [0f37c731-491a-49fb-baea-534818fc8172] Running
	I0913 23:56:29.611046   25213 system_pods.go:89] "kube-apiserver-ha-817269" [b2450ddc-c45b-4238-80f2-74cfd302219c] Running
	I0913 23:56:29.611052   25213 system_pods.go:89] "kube-apiserver-ha-817269-m02" [b5c74a8a-5fef-4a85-b983-3e370828d2c3] Running
	I0913 23:56:29.611056   25213 system_pods.go:89] "kube-apiserver-ha-817269-m03" [58c8463c-880c-4e4a-b4f8-1460801fab06] Running
	I0913 23:56:29.611062   25213 system_pods.go:89] "kube-controller-manager-ha-817269" [483f5cea-02b5-4413-980c-1a788d4b7180] Running
	I0913 23:56:29.611065   25213 system_pods.go:89] "kube-controller-manager-ha-817269-m02" [2acdb65f-d61f-4214-a05f-93065c600c91] Running
	I0913 23:56:29.611069   25213 system_pods.go:89] "kube-controller-manager-ha-817269-m03" [aa8cf8e9-cafe-46cc-aa22-3c188fd160fc] Running
	I0913 23:56:29.611073   25213 system_pods.go:89] "kube-proxy-7t9b2" [edc48f0a-12b6-4712-9e4f-87852a4adefd] Running
	I0913 23:56:29.611076   25213 system_pods.go:89] "kube-proxy-bwr6g" [256835a2-a848-4572-9e9f-e99350c07ed2] Running
	I0913 23:56:29.611080   25213 system_pods.go:89] "kube-proxy-p9lkl" [cf9b3ec9-8ac8-468c-887e-3b572646d4db] Running
	I0913 23:56:29.611084   25213 system_pods.go:89] "kube-scheduler-ha-817269" [3559400f-4422-4156-84d6-c14d8e463122] Running
	I0913 23:56:29.611091   25213 system_pods.go:89] "kube-scheduler-ha-817269-m02" [d61d2029-9136-4c9e-b46b-2e3f019475a9] Running
	I0913 23:56:29.611095   25213 system_pods.go:89] "kube-scheduler-ha-817269-m03" [2dd97d6a-9b14-41e2-bf07-628073272e6d] Running
	I0913 23:56:29.611099   25213 system_pods.go:89] "kube-vip-ha-817269" [1fda5312-9aa8-4ab9-b2db-178289f09fd1] Running
	I0913 23:56:29.611136   25213 system_pods.go:89] "kube-vip-ha-817269-m02" [be2cb069-f099-454e-aaa5-81c41d41ba4c] Running
	I0913 23:56:29.611146   25213 system_pods.go:89] "kube-vip-ha-817269-m03" [e50f8baf-d5d0-4534-b1ce-eb76b23764f7] Running
	I0913 23:56:29.611150   25213 system_pods.go:89] "storage-provisioner" [cc88d524-adef-4f7a-ae34-c02a9d94b99d] Running
	I0913 23:56:29.611156   25213 system_pods.go:126] duration metric: took 208.562026ms to wait for k8s-apps to be running ...
	I0913 23:56:29.611165   25213 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:56:29.611210   25213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:56:29.626851   25213 system_svc.go:56] duration metric: took 15.678046ms WaitForService to wait for kubelet
	I0913 23:56:29.626887   25213 kubeadm.go:582] duration metric: took 23.713525989s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:56:29.626909   25213 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:56:29.798234   25213 request.go:632] Waited for 171.245269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
	I0913 23:56:29.798313   25213 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
	I0913 23:56:29.798319   25213 round_trippers.go:469] Request Headers:
	I0913 23:56:29.798326   25213 round_trippers.go:473]     Accept: application/json, */*
	I0913 23:56:29.798332   25213 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0913 23:56:29.803161   25213 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0913 23:56:29.804631   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:56:29.804654   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:56:29.804664   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:56:29.804667   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:56:29.804670   25213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 23:56:29.804673   25213 node_conditions.go:123] node cpu capacity is 2
	I0913 23:56:29.804677   25213 node_conditions.go:105] duration metric: took 177.763156ms to run NodePressure ...
	I0913 23:56:29.804687   25213 start.go:241] waiting for startup goroutines ...
	I0913 23:56:29.804704   25213 start.go:255] writing updated cluster config ...
	I0913 23:56:29.804974   25213 ssh_runner.go:195] Run: rm -f paused
	I0913 23:56:29.859662   25213 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:56:29.861836   25213 out.go:177] * Done! kubectl is now configured to use "ha-817269" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.107893283Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-5cbmn,Uid:e288c7d7-36f3-4fd1-a944-403098141304,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271791165946425,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:56:30.842079246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mwpbw,Uid:e19eb0be-8e26-4e88-824e-aaec9360bf6c,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1726271649243615136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:54:08.899566051Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc88d524-adef-4f7a-ae34-c02a9d94b99d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271649241748960,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T23:54:08.903498000Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rq5pv,Uid:34cd12c1-d279-4067-a290-be3af39ddf20,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1726271649196873739,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:54:08.888588375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&PodSandboxMetadata{Name:kube-proxy-p9lkl,Uid:cf9b3ec9-8ac8-468c-887e-3b572646d4db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271636899258050,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-13T23:53:56.574467871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&PodSandboxMetadata{Name:kindnet-dxj2g,Uid:5dd2f191-9de6-498e-9d86-7a355340f4a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271636893420490,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:53:56.582644109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-817269,Uid:eda3685dd3d4be5c5da91818ed6f5c19,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271625767362317,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eda3685dd3d4be5c5da91818ed6f5c19,kubernetes.io/config.seen: 2024-09-13T23:53:45.265732927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-817269,Uid:54df01655d467a857baf090852a9c527,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271625763550943,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{kube
rnetes.io/config.hash: 54df01655d467a857baf090852a9c527,kubernetes.io/config.seen: 2024-09-13T23:53:45.265735148Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-817269,Uid:0c577b2f163a5153f09183c3f12f62cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271625759554870,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c577b2f163a5153f09183c3f12f62cf,kubernetes.io/config.seen: 2024-09-13T23:53:45.265734288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&PodSandboxMetadata{Name:etcd-ha-817269,Uid:ed7dba6f
f1cb1dff87cc0fe9bba89894,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271625755082209,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.132:2379,kubernetes.io/config.hash: ed7dba6ff1cb1dff87cc0fe9bba89894,kubernetes.io/config.seen: 2024-09-13T23:53:45.265727629Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-817269,Uid:cbd5cb5db01522f88f4d8c5e21684ad5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726271625743820726,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.132:8443,kubernetes.io/config.hash: cbd5cb5db01522f88f4d8c5e21684ad5,kubernetes.io/config.seen: 2024-09-13T23:53:45.265731615Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ee19ae19-5433-4828-b3db-86f3de8692d4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.109080626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d254b61-8e98-4f7a-8d4f-3d275c9fb490 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.109216613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d254b61-8e98-4f7a-8d4f-3d275c9fb490 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.109439801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d254b61-8e98-4f7a-8d4f-3d275c9fb490 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.117644143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1d1626a-817c-487a-b7dd-1b61c6196746 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.117719987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1d1626a-817c-487a-b7dd-1b61c6196746 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.118883264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=230a21e1-44b2-44bc-b3d3-8f095a574e42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.119381155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272065119353456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=230a21e1-44b2-44bc-b3d3-8f095a574e42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.120530902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03e341f9-6147-4a4d-9570-53418920c42b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.120585310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03e341f9-6147-4a4d-9570-53418920c42b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.121179995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03e341f9-6147-4a4d-9570-53418920c42b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.158170057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b2e0c58-55de-41d0-b3f5-d88760702b62 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.158247505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b2e0c58-55de-41d0-b3f5-d88760702b62 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.159339220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49da1264-5eb0-47dc-9d33-52b448c0b81a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.159823902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272065159795434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49da1264-5eb0-47dc-9d33-52b448c0b81a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.160498759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=746e7902-2979-4b8e-b714-f15499f5a935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.160571040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=746e7902-2979-4b8e-b714-f15499f5a935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.160787756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=746e7902-2979-4b8e-b714-f15499f5a935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.201399532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3049fd38-fc54-4d72-9959-96233b1809bf name=/runtime.v1.RuntimeService/Version
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.201473669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3049fd38-fc54-4d72-9959-96233b1809bf name=/runtime.v1.RuntimeService/Version
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.202748267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e87dd9d-eaf1-4a0c-b99f-e218a515ca5b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.203226947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272065203201774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e87dd9d-eaf1-4a0c-b99f-e218a515ca5b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.203802828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43cb98e6-7e82-4d67-b810-0bdf9d5b7133 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.204002467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43cb98e6-7e82-4d67-b810-0bdf9d5b7133 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:01:05 ha-817269 crio[664]: time="2024-09-14 00:01:05.204305210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726271794970179345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649511800372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726271649512620715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead,PodSandboxId:4d00c26a0280114798d2c3576de8afc77129cfe8541367f3e90aeceabf125c29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726271649398349870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17262716
37501595570,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726271637231962559,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5,PodSandboxId:7f935f0bca02ae8f6b9d78f58346333a953fdf2a5f62d7c12659213906a789e3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726271628623172534,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54df01655d467a857baf090852a9c527,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726271626009900050,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726271625986497215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08,PodSandboxId:9f7ec2e6fa8fd5c61e1996e1843c5c73f2df356c3d5690bdb3ad96bafbd754c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726271625942710514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96,PodSandboxId:07ce99ad32595cc33cb576cb8ded69fb28e9f073ae56ab85c6b0f8b15c46330f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726271625958549372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43cb98e6-7e82-4d67-b810-0bdf9d5b7133 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c3d244ad4c30       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   2ff13b6745379       busybox-7dff88458-5cbmn
	61abb6eb65e46       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   36c20ca07db88       coredns-7c65d6cfc9-rq5pv
	4ce76346be5b3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8b315def4f628       coredns-7c65d6cfc9-mwpbw
	315adcde5c56f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4d00c26a02801       storage-provisioner
	b992c3b895609       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   f453fe4fb77a3       kindnet-dxj2g
	f8f2322f127fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   babdf5981ec86       kube-proxy-p9lkl
	2faad36b3b9a3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   7f935f0bca02a       kube-vip-ha-817269
	45371c7b7dce4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   eccbef0ef4d20       kube-scheduler-ha-817269
	33ac2ce16b58b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   0ea1c016c25f7       etcd-ha-817269
	a72c7ed6fd0b9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   07ce99ad32595       kube-controller-manager-ha-817269
	11c2a11c941f9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   9f7ec2e6fa8fd       kube-apiserver-ha-817269
	
	
	==> coredns [4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94] <==
	[INFO] 10.244.0.4:55927 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000399545s
	[INFO] 10.244.0.4:49919 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003496351s
	[INFO] 10.244.0.4:46401 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000250317s
	[INFO] 10.244.0.4:47587 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000278308s
	[INFO] 10.244.2.2:47599 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000288668s
	[INFO] 10.244.2.2:53222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165792s
	[INFO] 10.244.2.2:51300 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207369s
	[INFO] 10.244.2.2:56912 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110533s
	[INFO] 10.244.2.2:37804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204459s
	[INFO] 10.244.1.2:54436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226539s
	[INFO] 10.244.1.2:56082 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001819826s
	[INFO] 10.244.1.2:58316 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222276s
	[INFO] 10.244.1.2:42306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083319s
	[INFO] 10.244.0.4:53876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020116s
	[INFO] 10.244.0.4:56768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013293s
	[INFO] 10.244.0.4:47653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.0.4:50365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154019s
	[INFO] 10.244.2.2:56862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195398s
	[INFO] 10.244.2.2:40784 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189124s
	[INFO] 10.244.2.2:42797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106937s
	[INFO] 10.244.1.2:49876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246067s
	[INFO] 10.244.0.4:44026 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000299901s
	[INFO] 10.244.0.4:40123 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000233032s
	[INFO] 10.244.1.2:42204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000500811s
	[INFO] 10.244.1.2:44587 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205062s
	
	
	==> coredns [61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997] <==
	[INFO] 10.244.1.2:46173 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000159167s
	[INFO] 10.244.1.2:57795 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000608214s
	[INFO] 10.244.0.4:58344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020088s
	[INFO] 10.244.0.4:39998 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005418524s
	[INFO] 10.244.0.4:57052 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284617s
	[INFO] 10.244.0.4:59585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149604s
	[INFO] 10.244.2.2:44013 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0019193s
	[INFO] 10.244.2.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022048s
	[INFO] 10.244.2.2:33172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001513908s
	[INFO] 10.244.1.2:35965 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790224s
	[INFO] 10.244.1.2:42555 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000321828s
	[INFO] 10.244.1.2:54761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123494s
	[INFO] 10.244.1.2:51742 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176208s
	[INFO] 10.244.2.2:55439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172115s
	[INFO] 10.244.1.2:32823 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000209293s
	[INFO] 10.244.1.2:54911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191869s
	[INFO] 10.244.1.2:45538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090559s
	[INFO] 10.244.0.4:51099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293009s
	[INFO] 10.244.0.4:52402 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204563s
	[INFO] 10.244.2.2:48710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000318957s
	[INFO] 10.244.2.2:51855 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124089s
	[INFO] 10.244.2.2:54763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000257295s
	[INFO] 10.244.2.2:56836 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186617s
	[INFO] 10.244.1.2:45824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223466s
	[INFO] 10.244.1.2:32974 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143816s
	
	
	==> describe nodes <==
	Name:               ha-817269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_53_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:53:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:01:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:56:56 +0000   Fri, 13 Sep 2024 23:54:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-817269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc026746bcc47d49a7f508137c16c0a
	  System UUID:                0bc02674-6bcc-47d4-9a7f-508137c16c0a
	  Boot ID:                    1a383d96-7a2a-4a67-94ca-0f262bc14568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5cbmn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-7c65d6cfc9-mwpbw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 coredns-7c65d6cfc9-rq5pv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 etcd-ha-817269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m13s
	  kube-system                 kindnet-dxj2g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-ha-817269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-controller-manager-ha-817269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-p9lkl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-ha-817269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-vip-ha-817269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m7s   kube-proxy       
	  Normal  Starting                 7m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s  kubelet          Node ha-817269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s  kubelet          Node ha-817269 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s  kubelet          Node ha-817269 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m10s  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal  NodeReady                6m57s  kubelet          Node ha-817269 status is now: NodeReady
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal  RegisteredNode           4m55s  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	
	
	Name:               ha-817269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_54_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:54:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:57:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Sep 2024 23:56:48 +0000   Fri, 13 Sep 2024 23:58:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-817269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 260fc9ca7fe3421fbf6de250d4218230
	  System UUID:                260fc9ca-7fe3-421f-bf6d-e250d4218230
	  Boot ID:                    5829ad79-34f1-4783-8856-f43f06d412e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wff9f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-817269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-qcfqk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-817269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-817269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-7t9b2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-817269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-817269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node ha-817269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s (x7 over 6m19s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  NodeNotReady             2m45s                  node-controller  Node ha-817269-m02 status is now: NodeNotReady
	
	
	Name:               ha-817269-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_56_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:56:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:00:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:57:03 +0000   Fri, 13 Sep 2024 23:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-817269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cd9a23c8c734501a4ad2e1089d5fd49
	  System UUID:                7cd9a23c-8c73-4501-a4ad-2e1089d5fd49
	  Boot ID:                    85dd8157-d1db-4702-87e2-60247276cb9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vsts4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-817269-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-np2s8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m3s
	  kube-system                 kube-apiserver-ha-817269-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-ha-817269-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-bwr6g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-scheduler-ha-817269-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-817269-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m4s)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m4s)  kubelet          Node ha-817269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m4s)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal  RegisteredNode           5m                   node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal  RegisteredNode           4m55s                node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	
	
	Name:               ha-817269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_57_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:57:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:01:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:57:39 +0000   Fri, 13 Sep 2024 23:57:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-817269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5153efd89a4042b8870c772e8476a0
	  System UUID:                ca5153ef-d89a-4042-b887-0c772e8476a0
	  Boot ID:                    5a56c1c7-47d1-459d-93e6-f87cc04e73b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-45h44       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-b8pch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x2 over 3m57s)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x2 over 3m57s)  kubelet          Node ha-817269-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x2 over 3m57s)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node ha-817269-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep13 23:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051672] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037846] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.798368] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.951398] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.553139] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.741095] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.067049] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057264] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.183859] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.112834] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.262666] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.811628] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.142511] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.066169] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.376137] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.080016] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.054020] kauditd_printk_skb: 26 callbacks suppressed
	[Sep13 23:54] kauditd_printk_skb: 35 callbacks suppressed
	[ +43.648430] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a] <==
	{"level":"warn","ts":"2024-09-14T00:01:05.423174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.465337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.476289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.480520Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.490079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.497246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.503801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.507184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.510761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.517042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.523851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.524182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.530915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.536755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.540670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.546471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.552288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.558205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.561621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.564673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.568249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.574339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.584351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.620920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:01:05.623306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:01:05 up 7 min,  0 users,  load average: 0.31, 0.25, 0.11
	Linux ha-817269 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e] <==
	I0914 00:00:28.534686       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:00:38.535195       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:00:38.535321       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:00:38.535569       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:00:38.535596       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:00:38.535706       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:00:38.535727       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:00:38.535790       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:00:38.535798       1 main.go:299] handling current node
	I0914 00:00:48.541482       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:00:48.541554       1 main.go:299] handling current node
	I0914 00:00:48.541579       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:00:48.541604       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:00:48.541790       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:00:48.541835       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:00:48.541928       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:00:48.541955       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:00:58.534279       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:00:58.534441       1 main.go:299] handling current node
	I0914 00:00:58.534490       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:00:58.534514       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:00:58.534651       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:00:58.534673       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:00:58.534763       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:00:58.534784       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08] <==
	I0913 23:53:50.984469       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0913 23:53:50.993069       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132]
	I0913 23:53:50.994839       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 23:53:51.001266       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0913 23:53:51.102551       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 23:53:52.262937       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 23:53:52.284367       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0913 23:53:52.457740       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 23:53:56.505781       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0913 23:53:56.770308       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0913 23:56:35.798668       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49360: use of closed network connection
	E0913 23:56:35.984574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49376: use of closed network connection
	E0913 23:56:36.173865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49384: use of closed network connection
	E0913 23:56:36.369578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49404: use of closed network connection
	E0913 23:56:36.550301       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49412: use of closed network connection
	E0913 23:56:36.730848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49424: use of closed network connection
	E0913 23:56:36.917966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49456: use of closed network connection
	E0913 23:56:37.115026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49472: use of closed network connection
	E0913 23:56:37.301864       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49490: use of closed network connection
	E0913 23:56:37.604065       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49532: use of closed network connection
	E0913 23:56:37.806772       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49550: use of closed network connection
	E0913 23:56:37.996521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49568: use of closed network connection
	E0913 23:56:38.167786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49584: use of closed network connection
	E0913 23:56:38.338726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49602: use of closed network connection
	E0913 23:56:38.511919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49630: use of closed network connection
	
	
	==> kube-controller-manager [a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96] <==
	I0913 23:57:08.650792       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-817269-m04\" does not exist"
	I0913 23:57:08.710055       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-817269-m04" podCIDRs=["10.244.3.0/24"]
	I0913 23:57:08.710171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:08.710261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:08.885890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:09.309005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:09.838892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.833728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.900896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.953932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:10.954211       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-817269-m04"
	I0913 23:57:11.016775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:19.068785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:29.630779       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-817269-m04"
	I0913 23:57:29.630879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:29.645592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:29.833375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:57:39.690871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0913 23:58:20.863075       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-817269-m04"
	I0913 23:58:20.863443       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0913 23:58:20.885488       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0913 23:58:21.004518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.235766ms"
	I0913 23:58:21.004934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.508µs"
	I0913 23:58:21.012888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0913 23:58:26.184970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	
	
	==> kube-proxy [f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:53:57.656514       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:53:57.683604       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
	E0913 23:53:57.683885       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:53:57.722667       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:53:57.722712       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:53:57.722736       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:53:57.725734       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:53:57.726491       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:53:57.726520       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:53:57.729944       1 config.go:199] "Starting service config controller"
	I0913 23:53:57.730942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:53:57.731195       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:53:57.731248       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:53:57.735705       1 config.go:328] "Starting node config controller"
	I0913 23:53:57.735729       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:53:57.832204       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:53:57.832244       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:53:57.835785       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5] <==
	W0913 23:53:50.269686       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 23:53:50.270023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 23:53:50.384172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 23:53:50.384266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 23:53:52.456288       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 23:56:02.084456       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-np2s8\": pod kindnet-np2s8 is already assigned to node \"ha-817269-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-np2s8" node="ha-817269-m03"
	E0913 23:56:02.084784       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bwr6g\": pod kube-proxy-bwr6g is already assigned to node \"ha-817269-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bwr6g" node="ha-817269-m03"
	E0913 23:56:02.084831       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-np2s8\": pod kindnet-np2s8 is already assigned to node \"ha-817269-m03\"" pod="kube-system/kindnet-np2s8"
	E0913 23:56:02.084950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 256835a2-a848-4572-9e9f-e99350c07ed2(kube-system/kube-proxy-bwr6g) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bwr6g"
	E0913 23:56:02.084999       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bwr6g\": pod kube-proxy-bwr6g is already assigned to node \"ha-817269-m03\"" pod="kube-system/kube-proxy-bwr6g"
	I0913 23:56:02.085031       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bwr6g" node="ha-817269-m03"
	E0913 23:56:30.809387       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vsts4\": pod busybox-7dff88458-vsts4 is already assigned to node \"ha-817269-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vsts4" node="ha-817269-m03"
	E0913 23:56:30.813264       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5d1a6d17-44a4-4b61-b86f-4455a16dee23(default/busybox-7dff88458-vsts4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vsts4"
	E0913 23:56:30.814009       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vsts4\": pod busybox-7dff88458-vsts4 is already assigned to node \"ha-817269-m03\"" pod="default/busybox-7dff88458-vsts4"
	I0913 23:56:30.814255       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vsts4" node="ha-817269-m03"
	E0913 23:56:30.847165       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wff9f\": pod busybox-7dff88458-wff9f is already assigned to node \"ha-817269-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wff9f" node="ha-817269-m02"
	E0913 23:56:30.847268       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wff9f\": pod busybox-7dff88458-wff9f is already assigned to node \"ha-817269-m02\"" pod="default/busybox-7dff88458-wff9f"
	E0913 23:56:30.906194       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:56:30.906282       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e288c7d7-36f3-4fd1-a944-403098141304(default/busybox-7dff88458-5cbmn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5cbmn"
	E0913 23:56:30.906305       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" pod="default/busybox-7dff88458-5cbmn"
	I0913 23:56:30.906349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:57:08.751565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	E0913 23:57:08.751687       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 234c68a0-c2e4-4784-8bda-6c0a1ffc84db(kube-system/kube-proxy-tdcn8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tdcn8"
	E0913 23:57:08.751719       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" pod="kube-system/kube-proxy-tdcn8"
	I0913 23:57:08.751751       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	
	
	==> kubelet <==
	Sep 13 23:59:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 23:59:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 23:59:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 23:59:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 23:59:52 ha-817269 kubelet[1306]: E0913 23:59:52.511505    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271992511060583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 13 23:59:52 ha-817269 kubelet[1306]: E0913 23:59:52.511540    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726271992511060583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:02 ha-817269 kubelet[1306]: E0914 00:00:02.513470    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272002513163095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:02 ha-817269 kubelet[1306]: E0914 00:00:02.513775    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272002513163095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:12 ha-817269 kubelet[1306]: E0914 00:00:12.515522    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272012515081580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:12 ha-817269 kubelet[1306]: E0914 00:00:12.515793    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272012515081580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:22 ha-817269 kubelet[1306]: E0914 00:00:22.516897    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272022516634954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:22 ha-817269 kubelet[1306]: E0914 00:00:22.517221    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272022516634954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:32 ha-817269 kubelet[1306]: E0914 00:00:32.519004    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272032518588679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:32 ha-817269 kubelet[1306]: E0914 00:00:32.519063    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272032518588679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:42 ha-817269 kubelet[1306]: E0914 00:00:42.520277    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272042519918154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:42 ha-817269 kubelet[1306]: E0914 00:00:42.520341    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272042519918154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:52 ha-817269 kubelet[1306]: E0914 00:00:52.379187    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:00:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:00:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:00:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:00:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:00:52 ha-817269 kubelet[1306]: E0914 00:00:52.522155    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272052521769695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:00:52 ha-817269 kubelet[1306]: E0914 00:00:52.522255    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272052521769695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:01:02 ha-817269 kubelet[1306]: E0914 00:01:02.526486    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272062526205395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:01:02 ha-817269 kubelet[1306]: E0914 00:01:02.526510    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272062526205395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-817269 -n ha-817269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-817269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (51.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-817269 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-817269 -v=7 --alsologtostderr
E0914 00:02:20.624279   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:02:48.326811   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-817269 -v=7 --alsologtostderr: exit status 82 (2m1.779272852s)

                                                
                                                
-- stdout --
	* Stopping node "ha-817269-m04"  ...
	* Stopping node "ha-817269-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:01:07.036990   30955 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:01:07.037083   30955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:01:07.037091   30955 out.go:358] Setting ErrFile to fd 2...
	I0914 00:01:07.037095   30955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:01:07.037255   30955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:01:07.037454   30955 out.go:352] Setting JSON to false
	I0914 00:01:07.037532   30955 mustload.go:65] Loading cluster: ha-817269
	I0914 00:01:07.037917   30955 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:01:07.038000   30955 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0914 00:01:07.038166   30955 mustload.go:65] Loading cluster: ha-817269
	I0914 00:01:07.038294   30955 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:01:07.038317   30955 stop.go:39] StopHost: ha-817269-m04
	I0914 00:01:07.038681   30955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:07.038717   30955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:07.053897   30955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0914 00:01:07.054333   30955 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:07.054863   30955 main.go:141] libmachine: Using API Version  1
	I0914 00:01:07.054887   30955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:07.055210   30955 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:07.058261   30955 out.go:177] * Stopping node "ha-817269-m04"  ...
	I0914 00:01:07.059336   30955 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 00:01:07.059368   30955 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:01:07.059656   30955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 00:01:07.059680   30955 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:01:07.062747   30955 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:01:07.063279   30955 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:56:53 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:01:07.063299   30955 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:01:07.063457   30955 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:01:07.063673   30955 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:01:07.063842   30955 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:01:07.063979   30955 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:01:07.146487   30955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 00:01:07.199286   30955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 00:01:07.253950   30955 main.go:141] libmachine: Stopping "ha-817269-m04"...
	I0914 00:01:07.253975   30955 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:01:07.255535   30955 main.go:141] libmachine: (ha-817269-m04) Calling .Stop
	I0914 00:01:07.258847   30955 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 0/120
	I0914 00:01:08.334391   30955 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:01:08.335778   30955 main.go:141] libmachine: Machine "ha-817269-m04" was stopped.
	I0914 00:01:08.335828   30955 stop.go:75] duration metric: took 1.276492637s to stop
	I0914 00:01:08.335850   30955 stop.go:39] StopHost: ha-817269-m03
	I0914 00:01:08.336127   30955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:01:08.336168   30955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:01:08.351213   30955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0914 00:01:08.351762   30955 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:01:08.352488   30955 main.go:141] libmachine: Using API Version  1
	I0914 00:01:08.352531   30955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:01:08.352902   30955 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:01:08.354713   30955 out.go:177] * Stopping node "ha-817269-m03"  ...
	I0914 00:01:08.356084   30955 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 00:01:08.356113   30955 main.go:141] libmachine: (ha-817269-m03) Calling .DriverName
	I0914 00:01:08.356320   30955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 00:01:08.356346   30955 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHHostname
	I0914 00:01:08.359609   30955 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:01:08.360095   30955 main.go:141] libmachine: (ha-817269-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:06", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:55:25 +0000 UTC Type:0 Mac:52:54:00:61:13:06 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-817269-m03 Clientid:01:52:54:00:61:13:06}
	I0914 00:01:08.360123   30955 main.go:141] libmachine: (ha-817269-m03) DBG | domain ha-817269-m03 has defined IP address 192.168.39.68 and MAC address 52:54:00:61:13:06 in network mk-ha-817269
	I0914 00:01:08.360258   30955 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHPort
	I0914 00:01:08.360453   30955 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHKeyPath
	I0914 00:01:08.360725   30955 main.go:141] libmachine: (ha-817269-m03) Calling .GetSSHUsername
	I0914 00:01:08.360855   30955 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m03/id_rsa Username:docker}
	I0914 00:01:08.458408   30955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 00:01:08.514148   30955 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 00:01:08.570129   30955 main.go:141] libmachine: Stopping "ha-817269-m03"...
	I0914 00:01:08.570163   30955 main.go:141] libmachine: (ha-817269-m03) Calling .GetState
	I0914 00:01:08.571657   30955 main.go:141] libmachine: (ha-817269-m03) Calling .Stop
	I0914 00:01:08.574881   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 0/120
	I0914 00:01:09.576661   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 1/120
	I0914 00:01:10.578048   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 2/120
	I0914 00:01:11.579663   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 3/120
	I0914 00:01:12.581058   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 4/120
	I0914 00:01:13.582998   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 5/120
	I0914 00:01:14.584674   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 6/120
	I0914 00:01:15.586215   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 7/120
	I0914 00:01:16.588610   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 8/120
	I0914 00:01:17.589928   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 9/120
	I0914 00:01:18.591475   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 10/120
	I0914 00:01:19.593089   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 11/120
	I0914 00:01:20.594453   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 12/120
	I0914 00:01:21.595816   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 13/120
	I0914 00:01:22.597256   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 14/120
	I0914 00:01:23.599106   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 15/120
	I0914 00:01:24.600649   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 16/120
	I0914 00:01:25.602010   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 17/120
	I0914 00:01:26.603483   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 18/120
	I0914 00:01:27.604661   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 19/120
	I0914 00:01:28.606060   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 20/120
	I0914 00:01:29.607410   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 21/120
	I0914 00:01:30.608791   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 22/120
	I0914 00:01:31.610137   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 23/120
	I0914 00:01:32.611635   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 24/120
	I0914 00:01:33.613552   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 25/120
	I0914 00:01:34.615622   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 26/120
	I0914 00:01:35.616906   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 27/120
	I0914 00:01:36.618541   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 28/120
	I0914 00:01:37.619883   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 29/120
	I0914 00:01:38.622128   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 30/120
	I0914 00:01:39.623815   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 31/120
	I0914 00:01:40.625114   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 32/120
	I0914 00:01:41.627181   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 33/120
	I0914 00:01:42.628974   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 34/120
	I0914 00:01:43.630690   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 35/120
	I0914 00:01:44.632088   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 36/120
	I0914 00:01:45.633777   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 37/120
	I0914 00:01:46.635116   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 38/120
	I0914 00:01:47.636452   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 39/120
	I0914 00:01:48.638162   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 40/120
	I0914 00:01:49.639643   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 41/120
	I0914 00:01:50.640921   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 42/120
	I0914 00:01:51.642335   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 43/120
	I0914 00:01:52.643779   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 44/120
	I0914 00:01:53.645641   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 45/120
	I0914 00:01:54.647023   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 46/120
	I0914 00:01:55.648723   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 47/120
	I0914 00:01:56.650159   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 48/120
	I0914 00:01:57.651480   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 49/120
	I0914 00:01:58.653179   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 50/120
	I0914 00:01:59.654677   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 51/120
	I0914 00:02:00.656255   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 52/120
	I0914 00:02:01.657673   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 53/120
	I0914 00:02:02.658849   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 54/120
	I0914 00:02:03.660689   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 55/120
	I0914 00:02:04.661941   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 56/120
	I0914 00:02:05.663266   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 57/120
	I0914 00:02:06.664646   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 58/120
	I0914 00:02:07.666366   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 59/120
	I0914 00:02:08.668810   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 60/120
	I0914 00:02:09.670314   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 61/120
	I0914 00:02:10.671742   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 62/120
	I0914 00:02:11.673134   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 63/120
	I0914 00:02:12.674477   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 64/120
	I0914 00:02:13.676508   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 65/120
	I0914 00:02:14.678333   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 66/120
	I0914 00:02:15.680300   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 67/120
	I0914 00:02:16.681797   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 68/120
	I0914 00:02:17.683116   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 69/120
	I0914 00:02:18.685483   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 70/120
	I0914 00:02:19.686760   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 71/120
	I0914 00:02:20.688375   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 72/120
	I0914 00:02:21.690049   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 73/120
	I0914 00:02:22.692217   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 74/120
	I0914 00:02:23.693996   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 75/120
	I0914 00:02:24.695575   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 76/120
	I0914 00:02:25.697314   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 77/120
	I0914 00:02:26.698810   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 78/120
	I0914 00:02:27.700281   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 79/120
	I0914 00:02:28.701771   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 80/120
	I0914 00:02:29.703002   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 81/120
	I0914 00:02:30.705121   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 82/120
	I0914 00:02:31.706323   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 83/120
	I0914 00:02:32.707886   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 84/120
	I0914 00:02:33.709672   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 85/120
	I0914 00:02:34.711109   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 86/120
	I0914 00:02:35.712549   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 87/120
	I0914 00:02:36.713873   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 88/120
	I0914 00:02:37.715128   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 89/120
	I0914 00:02:38.717073   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 90/120
	I0914 00:02:39.718454   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 91/120
	I0914 00:02:40.719832   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 92/120
	I0914 00:02:41.721334   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 93/120
	I0914 00:02:42.722883   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 94/120
	I0914 00:02:43.724628   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 95/120
	I0914 00:02:44.726079   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 96/120
	I0914 00:02:45.727740   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 97/120
	I0914 00:02:46.729325   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 98/120
	I0914 00:02:47.730757   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 99/120
	I0914 00:02:48.732837   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 100/120
	I0914 00:02:49.734232   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 101/120
	I0914 00:02:50.735560   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 102/120
	I0914 00:02:51.737030   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 103/120
	I0914 00:02:52.738603   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 104/120
	I0914 00:02:53.740774   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 105/120
	I0914 00:02:54.742208   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 106/120
	I0914 00:02:55.743721   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 107/120
	I0914 00:02:56.745040   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 108/120
	I0914 00:02:57.746402   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 109/120
	I0914 00:02:58.748209   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 110/120
	I0914 00:02:59.749884   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 111/120
	I0914 00:03:00.751473   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 112/120
	I0914 00:03:01.752836   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 113/120
	I0914 00:03:02.754085   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 114/120
	I0914 00:03:03.755517   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 115/120
	I0914 00:03:04.757257   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 116/120
	I0914 00:03:05.758797   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 117/120
	I0914 00:03:06.760158   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 118/120
	I0914 00:03:07.761564   30955 main.go:141] libmachine: (ha-817269-m03) Waiting for machine to stop 119/120
	I0914 00:03:08.762633   30955 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 00:03:08.762675   30955 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 00:03:08.764980   30955 out.go:201] 
	W0914 00:03:08.766279   30955 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 00:03:08.766298   30955 out.go:270] * 
	* 
	W0914 00:03:08.768486   30955 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:03:08.770776   30955 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-817269 -v=7 --alsologtostderr" : exit status 82
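Note on the failure mode above: the stop path polls the VM driver roughly once per second and gives up after 120 attempts, which is the "Waiting for machine to stop 0/120 ... 119/120" progression followed by GUEST_STOP_TIMEOUT in the stderr output. Below is a minimal, illustrative Go sketch of that polling pattern only; the function names, structure, and the always-"Running" state are assumptions for demonstration and are not minikube's actual implementation.

	// Illustrative sketch (not minikube's code) of the stop-wait pattern seen in
	// the log: poll the VM state once per second for up to 120 attempts, then
	// surface a timeout error analogous to GUEST_STOP_TIMEOUT.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a stand-in for the driver's state query; it always reports
	// "Running" here to reproduce the timeout seen in the log.
	func vmState() string { return "Running" }

	func stopVM(maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			if vmState() == "Stopped" {
				return nil
			}
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopVM(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}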
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-817269 --wait=true -v=7 --alsologtostderr
E0914 00:04:31.535543   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:05:54.604451   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-817269 --wait=true -v=7 --alsologtostderr: (4m7.549619443s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-817269
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-817269 -n ha-817269
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-817269 logs -n 25: (1.749273107s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m04 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp testdata/cp-test.txt                                               | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269:/home/docker/cp-test_ha-817269-m04_ha-817269.txt                      |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269 sudo cat                                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269.txt                                |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03:/home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m03 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-817269 node stop m02 -v=7                                                    | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-817269 node start m02 -v=7                                                   | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-817269 -v=7                                                          | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-817269 -v=7                                                               | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-817269 --wait=true -v=7                                                   | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:03 UTC | 14 Sep 24 00:07 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-817269                                                               | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:07 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:03:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:03:08.816919   31414 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:03:08.817145   31414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:03:08.817152   31414 out.go:358] Setting ErrFile to fd 2...
	I0914 00:03:08.817156   31414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:03:08.817344   31414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:03:08.817908   31414 out.go:352] Setting JSON to false
	I0914 00:03:08.818813   31414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2735,"bootTime":1726269454,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:03:08.818907   31414 start.go:139] virtualization: kvm guest
	I0914 00:03:08.821258   31414 out.go:177] * [ha-817269] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:03:08.822403   31414 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:03:08.822412   31414 notify.go:220] Checking for updates...
	I0914 00:03:08.823615   31414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:03:08.824741   31414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:03:08.825852   31414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:03:08.826725   31414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:03:08.827809   31414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:03:08.829519   31414 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:03:08.829619   31414 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:03:08.830107   31414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:03:08.830157   31414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:03:08.846135   31414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0914 00:03:08.846645   31414 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:03:08.847168   31414 main.go:141] libmachine: Using API Version  1
	I0914 00:03:08.847187   31414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:03:08.847496   31414 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:03:08.847681   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:03:08.884738   31414 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:03:08.886032   31414 start.go:297] selected driver: kvm2
	I0914 00:03:08.886050   31414 start.go:901] validating driver "kvm2" against &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:03:08.886192   31414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:03:08.886504   31414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:03:08.886573   31414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:03:08.902015   31414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:03:08.902673   31414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:03:08.902709   31414 cni.go:84] Creating CNI manager for ""
	I0914 00:03:08.902760   31414 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 00:03:08.902816   31414 start.go:340] cluster config:
	{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:03:08.902950   31414 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:03:08.904470   31414 out.go:177] * Starting "ha-817269" primary control-plane node in "ha-817269" cluster
	I0914 00:03:08.905402   31414 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:03:08.905432   31414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:03:08.905441   31414 cache.go:56] Caching tarball of preloaded images
	I0914 00:03:08.905524   31414 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:03:08.905644   31414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:03:08.905779   31414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0914 00:03:08.905975   31414 start.go:360] acquireMachinesLock for ha-817269: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:03:08.906034   31414 start.go:364] duration metric: took 40.429µs to acquireMachinesLock for "ha-817269"
	I0914 00:03:08.906053   31414 start.go:96] Skipping create...Using existing machine configuration
	I0914 00:03:08.906063   31414 fix.go:54] fixHost starting: 
	I0914 00:03:08.906345   31414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:03:08.906382   31414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:03:08.920479   31414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0914 00:03:08.920895   31414 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:03:08.921392   31414 main.go:141] libmachine: Using API Version  1
	I0914 00:03:08.921411   31414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:03:08.921701   31414 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:03:08.921860   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:03:08.921947   31414 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:03:08.923348   31414 fix.go:112] recreateIfNeeded on ha-817269: state=Running err=<nil>
	W0914 00:03:08.923377   31414 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 00:03:08.925133   31414 out.go:177] * Updating the running kvm2 "ha-817269" VM ...
	I0914 00:03:08.926089   31414 machine.go:93] provisionDockerMachine start ...
	I0914 00:03:08.926110   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:03:08.926302   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:08.928469   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:08.928879   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:08.928911   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:08.929014   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:08.929188   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:08.929328   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:08.929445   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:08.929578   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:08.929758   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:08.929769   31414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:03:09.040884   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269
	
	I0914 00:03:09.040909   31414 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0914 00:03:09.041157   31414 buildroot.go:166] provisioning hostname "ha-817269"
	I0914 00:03:09.041179   31414 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0914 00:03:09.041379   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.044145   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.044560   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.044595   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.044724   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.044893   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.045079   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.045190   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.045335   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:09.045548   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:09.045573   31414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269 && echo "ha-817269" | sudo tee /etc/hostname
	I0914 00:03:09.172681   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269
	
	I0914 00:03:09.172708   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.175711   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.176134   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.176163   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.176336   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.176527   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.176696   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.176824   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.176977   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:09.177183   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:09.177206   31414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:03:09.288798   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:03:09.288836   31414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:03:09.288862   31414 buildroot.go:174] setting up certificates
	I0914 00:03:09.288873   31414 provision.go:84] configureAuth start
	I0914 00:03:09.288886   31414 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0914 00:03:09.289139   31414 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:03:09.291699   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.292055   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.292082   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.292230   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.294300   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.294613   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.294638   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.294791   31414 provision.go:143] copyHostCerts
	I0914 00:03:09.294816   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:03:09.294852   31414 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:03:09.294863   31414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:03:09.294936   31414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:03:09.295026   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:03:09.295049   31414 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:03:09.295059   31414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:03:09.295095   31414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:03:09.295157   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:03:09.295182   31414 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:03:09.295191   31414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:03:09.295230   31414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:03:09.295314   31414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269 san=[127.0.0.1 192.168.39.132 ha-817269 localhost minikube]
	I0914 00:03:09.377525   31414 provision.go:177] copyRemoteCerts
	I0914 00:03:09.377588   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:03:09.377613   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.380669   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.381037   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.381070   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.381274   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.381486   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.381665   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.381822   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:03:09.467365   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 00:03:09.467444   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:03:09.492394   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 00:03:09.492469   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0914 00:03:09.516523   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 00:03:09.516589   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:03:09.542239   31414 provision.go:87] duration metric: took 253.352545ms to configureAuth
	I0914 00:03:09.542267   31414 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:03:09.542549   31414 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:03:09.542671   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.545457   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.545884   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.545920   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.546054   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.546239   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.546381   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.546501   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.546640   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:09.546852   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:09.546872   31414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:04:40.461972   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:04:40.462008   31414 machine.go:96] duration metric: took 1m31.535906473s to provisionDockerMachine
	I0914 00:04:40.462023   31414 start.go:293] postStartSetup for "ha-817269" (driver="kvm2")
	I0914 00:04:40.462037   31414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:04:40.462078   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.462383   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:04:40.462423   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.465839   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.466281   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.466324   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.466465   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.466644   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.466787   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.466955   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:04:40.550726   31414 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:04:40.554811   31414 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:04:40.554832   31414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:04:40.554898   31414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:04:40.554987   31414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:04:40.555000   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0914 00:04:40.555104   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:04:40.564108   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:04:40.587088   31414 start.go:296] duration metric: took 125.050457ms for postStartSetup
	I0914 00:04:40.587127   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.587400   31414 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0914 00:04:40.587424   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.589973   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.590385   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.590408   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.590619   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.590791   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.590904   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.591020   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	W0914 00:04:40.674713   31414 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0914 00:04:40.674735   31414 fix.go:56] duration metric: took 1m31.768673407s for fixHost
	I0914 00:04:40.674768   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.677173   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.677479   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.677512   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.677715   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.677866   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.677985   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.678091   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.678204   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:04:40.678378   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:04:40.678399   31414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:04:40.788409   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726272280.746681996
	
	I0914 00:04:40.788436   31414 fix.go:216] guest clock: 1726272280.746681996
	I0914 00:04:40.788446   31414 fix.go:229] Guest: 2024-09-14 00:04:40.746681996 +0000 UTC Remote: 2024-09-14 00:04:40.674753415 +0000 UTC m=+91.893799601 (delta=71.928581ms)
	I0914 00:04:40.788470   31414 fix.go:200] guest clock delta is within tolerance: 71.928581ms
	I0914 00:04:40.788477   31414 start.go:83] releasing machines lock for "ha-817269", held for 1m31.882431541s
	I0914 00:04:40.788501   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.788779   31414 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:04:40.791546   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.791863   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.791888   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.792026   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.792558   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.792709   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.792810   31414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:04:40.792846   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.792886   31414 ssh_runner.go:195] Run: cat /version.json
	I0914 00:04:40.792908   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.795258   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.795507   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.795757   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.795798   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.795950   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.796084   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.796089   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.796109   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.796286   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.796309   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.796487   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.796511   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:04:40.796605   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.796752   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:04:40.908562   31414 ssh_runner.go:195] Run: systemctl --version
	I0914 00:04:40.914948   31414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:04:41.075274   31414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:04:41.081015   31414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:04:41.081120   31414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:04:41.090293   31414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 00:04:41.090332   31414 start.go:495] detecting cgroup driver to use...
	I0914 00:04:41.090393   31414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:04:41.107756   31414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:04:41.121864   31414 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:04:41.121914   31414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:04:41.135449   31414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:04:41.148796   31414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:04:41.295237   31414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:04:41.437117   31414 docker.go:233] disabling docker service ...
	I0914 00:04:41.437181   31414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:04:41.454439   31414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:04:41.468440   31414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:04:41.613226   31414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:04:41.759778   31414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:04:41.775373   31414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:04:41.794924   31414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:04:41.794984   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.805666   31414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:04:41.805732   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.816148   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.826366   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.836588   31414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:04:41.847992   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.859087   31414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.871087   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.882727   31414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:04:41.892798   31414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:04:41.902411   31414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:04:42.047830   31414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:04:42.812684   31414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:04:42.812760   31414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:04:42.818791   31414 start.go:563] Will wait 60s for crictl version
	I0914 00:04:42.818836   31414 ssh_runner.go:195] Run: which crictl
	I0914 00:04:42.822330   31414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:04:42.862126   31414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:04:42.862220   31414 ssh_runner.go:195] Run: crio --version
	I0914 00:04:42.890916   31414 ssh_runner.go:195] Run: crio --version
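
The lines above show minikube waiting up to 60s for the CRI-O socket (/var/run/crio/crio.sock) and then probing crictl/crio versions before continuing. As a rough illustration only — not minikube's actual implementation — the standalone Go sketch below polls for a unix socket until a deadline; the helper name waitForSocket and the 500ms poll interval are assumptions made for this example.

// Illustrative sketch: wait for a container-runtime socket to appear.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Treat the path as ready once it exists and is a unix socket.
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}

Once the socket is present, a version probe such as `sudo /usr/bin/crictl version` (as in the log above) confirms the runtime is actually serving requests, not just listening.
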
	I0914 00:04:42.919623   31414 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 00:04:42.920900   31414 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:04:42.923601   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:42.923967   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:42.923995   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:42.924176   31414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 00:04:42.928787   31414 kubeadm.go:883] updating cluster {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:04:42.928923   31414 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:04:42.928989   31414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:04:42.971731   31414 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:04:42.971755   31414 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:04:42.971828   31414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:04:43.010560   31414 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:04:43.010587   31414 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:04:43.010595   31414 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.31.1 crio true true} ...
	I0914 00:04:43.010688   31414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:04:43.010752   31414 ssh_runner.go:195] Run: crio config
	I0914 00:04:43.057397   31414 cni.go:84] Creating CNI manager for ""
	I0914 00:04:43.057421   31414 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 00:04:43.057433   31414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:04:43.057452   31414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-817269 NodeName:ha-817269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:04:43.057592   31414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-817269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:04:43.057623   31414 kube-vip.go:115] generating kube-vip config ...
	I0914 00:04:43.057664   31414 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 00:04:43.070251   31414 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 00:04:43.070366   31414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0914 00:04:43.070424   31414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:04:43.081580   31414 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:04:43.081649   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 00:04:43.091860   31414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0914 00:04:43.109118   31414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:04:43.126551   31414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0914 00:04:43.143531   31414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 00:04:43.159852   31414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 00:04:43.165053   31414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:04:43.310897   31414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:04:43.326150   31414 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.132
	I0914 00:04:43.326182   31414 certs.go:194] generating shared ca certs ...
	I0914 00:04:43.326203   31414 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:04:43.326394   31414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:04:43.326444   31414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:04:43.326454   31414 certs.go:256] generating profile certs ...
	I0914 00:04:43.326531   31414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0914 00:04:43.326566   31414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427
	I0914 00:04:43.326583   31414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.68 192.168.39.254]
	I0914 00:04:43.445973   31414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427 ...
	I0914 00:04:43.446007   31414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427: {Name:mk8b569386742ac48cb0304d4e3f1a765a9a2ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:04:43.446169   31414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427 ...
	I0914 00:04:43.446180   31414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427: {Name:mk073cdcbfc344b59cbade2545dc3d5aba23ec42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:04:43.446249   31414 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0914 00:04:43.446396   31414 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
	I0914 00:04:43.446525   31414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
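
The certs.go lines above record a signed API-server certificate being generated with SANs covering the service IP, localhost, the node IPs and the HA VIP. The self-contained Go sketch below is illustrative only: it self-signs rather than signing with the cluster CA, the subject common name is invented, and error handling is trimmed. It shows the general crypto/x509 pattern of issuing a certificate whose IPAddresses field carries those SANs.

// Illustrative sketch: issue a serving cert with IP SANs (self-signed for brevity).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided for brevity
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the addresses listed in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.132"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
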
	I0914 00:04:43.446541   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 00:04:43.446553   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 00:04:43.446563   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 00:04:43.446574   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 00:04:43.446584   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 00:04:43.446595   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 00:04:43.446607   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 00:04:43.446617   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 00:04:43.446665   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:04:43.446694   31414 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:04:43.446703   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:04:43.446726   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:04:43.446766   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:04:43.446792   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:04:43.446830   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:04:43.446858   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.446872   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.446884   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.447434   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:04:43.475330   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:04:43.501806   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:04:43.526820   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:04:43.550661   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 00:04:43.575143   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:04:43.599733   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:04:43.624993   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:04:43.649566   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:04:43.674794   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:04:43.701260   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:04:43.728385   31414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:04:43.754527   31414 ssh_runner.go:195] Run: openssl version
	I0914 00:04:43.763138   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:04:43.784506   31414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.789200   31414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.789258   31414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.794847   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:04:43.804441   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:04:43.815990   31414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.820707   31414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.820779   31414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.826497   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:04:43.836199   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:04:43.847502   31414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.852872   31414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.852962   31414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.859243   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:04:43.869668   31414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:04:43.875071   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:04:43.881321   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:04:43.887360   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:04:43.893013   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:04:43.898999   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:04:43.904830   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
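
The `openssl x509 -noout -in ... -checkend 86400` probes above verify that each control-plane certificate remains valid for at least another 24 hours. The Go sketch below is only a rough equivalent written to illustrate what -checkend checks; the command-line argument handling and exit codes are choices made for this example.

// Illustrative sketch: report whether a PEM certificate expires within 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // path to a PEM-encoded certificate
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
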
	I0914 00:04:43.910593   31414 kubeadm.go:392] StartCluster: {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:04:43.910727   31414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:04:43.910807   31414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:04:43.952873   31414 cri.go:89] found id: "3cd2250ea2d88345c8496fcdb842fc6a061cc676bfc39889a7db66e56a7988f5"
	I0914 00:04:43.952898   31414 cri.go:89] found id: "dd0bff85390e25c3ea3d3294406935d67d03bee37a44a5812fbe70914bf0adcb"
	I0914 00:04:43.952907   31414 cri.go:89] found id: "2a723ee0b6b3e403960334ef20660530ed192a996a1c504ada3caf9b4b0b0258"
	I0914 00:04:43.952911   31414 cri.go:89] found id: "61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997"
	I0914 00:04:43.952914   31414 cri.go:89] found id: "4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94"
	I0914 00:04:43.952920   31414 cri.go:89] found id: "315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead"
	I0914 00:04:43.952923   31414 cri.go:89] found id: "b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e"
	I0914 00:04:43.952925   31414 cri.go:89] found id: "f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d"
	I0914 00:04:43.952927   31414 cri.go:89] found id: "2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5"
	I0914 00:04:43.952934   31414 cri.go:89] found id: "45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5"
	I0914 00:04:43.952938   31414 cri.go:89] found id: "33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a"
	I0914 00:04:43.952941   31414 cri.go:89] found id: "a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96"
	I0914 00:04:43.952943   31414 cri.go:89] found id: "11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08"
	I0914 00:04:43.952946   31414 cri.go:89] found id: ""
	I0914 00:04:43.952986   31414 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 14 00:07:16 ha-817269 crio[3557]: time="2024-09-14 00:07:16.996045819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272436996023773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c016352a-d538-4487-960f-bebaecb18d4d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:16 ha-817269 crio[3557]: time="2024-09-14 00:07:16.996608869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=496ff000-13d1-4e54-a5dd-51148ecf1bcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:16 ha-817269 crio[3557]: time="2024-09-14 00:07:16.996671713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=496ff000-13d1-4e54-a5dd-51148ecf1bcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:16 ha-817269 crio[3557]: time="2024-09-14 00:07:16.997131390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=496ff000-13d1-4e54-a5dd-51148ecf1bcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.038261504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a13ee277-f7bf-4dc7-b451-f1a4c86b755c name=/runtime.v1.RuntimeService/Version
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.038352566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a13ee277-f7bf-4dc7-b451-f1a4c86b755c name=/runtime.v1.RuntimeService/Version
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.039357479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8d4c44a-f5b2-478d-8fdb-66b93b294894 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.039788165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272437039766789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8d4c44a-f5b2-478d-8fdb-66b93b294894 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.040288637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bce0c0f8-288a-4e2b-89b4-6f445219e7a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.040347958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bce0c0f8-288a-4e2b-89b4-6f445219e7a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.043552868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bce0c0f8-288a-4e2b-89b4-6f445219e7a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.097478416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ceae64bb-5056-4324-a489-78fb856bf308 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.097585305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ceae64bb-5056-4324-a489-78fb856bf308 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.098480787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14437b4e-65b8-4b2b-8d75-a2dbe60ac66a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.098926852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272437098865601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14437b4e-65b8-4b2b-8d75-a2dbe60ac66a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.099866000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0b281b4-fd4d-4a70-b74a-94dad58e9dfc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.099950417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0b281b4-fd4d-4a70-b74a-94dad58e9dfc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.100367843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0b281b4-fd4d-4a70-b74a-94dad58e9dfc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.143684255Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5de3c55b-a349-4e03-9b0b-ad80d1866309 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.143777984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5de3c55b-a349-4e03-9b0b-ad80d1866309 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.144954806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=727ab3ed-f301-45b0-b260-58a9c664b635 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.145533818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272437145506268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=727ab3ed-f301-45b0-b260-58a9c664b635 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.146231411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e51000e0-6225-497e-b1d6-09465e8b400c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.146302518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e51000e0-6225-497e-b1d6-09465e8b400c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:07:17 ha-817269 crio[3557]: time="2024-09-14 00:07:17.146705273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e51000e0-6225-497e-b1d6-09465e8b400c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0fd509e7c4da3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   19acc9fdd3bd4       storage-provisioner
	95b4d7f4a781a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   41d6b53cb5d9c       kube-controller-manager-ha-817269
	c1923ec759795       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   363fa8cab215a       kube-apiserver-ha-817269
	a9f19809f8575       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   19acc9fdd3bd4       storage-provisioner
	d52647c00652e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   14e0ffe608297       busybox-7dff88458-5cbmn
	ed6377406153c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   f3da812aa6182       kube-vip-ha-817269
	3b8be9d7ef173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   e62856cb4921c       kube-proxy-p9lkl
	99eea9846e2a3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   ad37d6b4ac39b       coredns-7c65d6cfc9-mwpbw
	febbe47268729       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   8075147ed19b1       kindnet-dxj2g
	7a85a86036d4e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   02d05f666d12d       etcd-ha-817269
	acc0f4c63f717       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   2aaae96708eb2       coredns-7c65d6cfc9-rq5pv
	1eb000680b819       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   363fa8cab215a       kube-apiserver-ha-817269
	fcffbcbfeb991       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   41d6b53cb5d9c       kube-controller-manager-ha-817269
	c73accba880ce       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   b8362da423aa6       kube-scheduler-ha-817269
	4c3d244ad4c30       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   2ff13b6745379       busybox-7dff88458-5cbmn
	61abb6eb65e46       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   36c20ca07db88       coredns-7c65d6cfc9-rq5pv
	4ce76346be5b3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   8b315def4f628       coredns-7c65d6cfc9-mwpbw
	b992c3b895609       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   f453fe4fb77a3       kindnet-dxj2g
	f8f2322f127fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   babdf5981ec86       kube-proxy-p9lkl
	45371c7b7dce4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   eccbef0ef4d20       kube-scheduler-ha-817269
	33ac2ce16b58b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   0ea1c016c25f7       etcd-ha-817269
	
	
	==> coredns [4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94] <==
	[INFO] 10.244.2.2:53222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165792s
	[INFO] 10.244.2.2:51300 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207369s
	[INFO] 10.244.2.2:56912 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110533s
	[INFO] 10.244.2.2:37804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204459s
	[INFO] 10.244.1.2:54436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226539s
	[INFO] 10.244.1.2:56082 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001819826s
	[INFO] 10.244.1.2:58316 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222276s
	[INFO] 10.244.1.2:42306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083319s
	[INFO] 10.244.0.4:53876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020116s
	[INFO] 10.244.0.4:56768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013293s
	[INFO] 10.244.0.4:47653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.0.4:50365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154019s
	[INFO] 10.244.2.2:56862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195398s
	[INFO] 10.244.2.2:40784 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189124s
	[INFO] 10.244.2.2:42797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106937s
	[INFO] 10.244.1.2:49876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246067s
	[INFO] 10.244.0.4:44026 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000299901s
	[INFO] 10.244.0.4:40123 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000233032s
	[INFO] 10.244.1.2:42204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000500811s
	[INFO] 10.244.1.2:44587 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205062s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1861&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1859&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1864&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997] <==
	[INFO] 10.244.0.4:39998 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005418524s
	[INFO] 10.244.0.4:57052 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284617s
	[INFO] 10.244.0.4:59585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149604s
	[INFO] 10.244.2.2:44013 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0019193s
	[INFO] 10.244.2.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022048s
	[INFO] 10.244.2.2:33172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001513908s
	[INFO] 10.244.1.2:35965 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790224s
	[INFO] 10.244.1.2:42555 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000321828s
	[INFO] 10.244.1.2:54761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123494s
	[INFO] 10.244.1.2:51742 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176208s
	[INFO] 10.244.2.2:55439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172115s
	[INFO] 10.244.1.2:32823 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000209293s
	[INFO] 10.244.1.2:54911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191869s
	[INFO] 10.244.1.2:45538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090559s
	[INFO] 10.244.0.4:51099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293009s
	[INFO] 10.244.0.4:52402 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204563s
	[INFO] 10.244.2.2:48710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000318957s
	[INFO] 10.244.2.2:51855 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124089s
	[INFO] 10.244.2.2:54763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000257295s
	[INFO] 10.244.2.2:56836 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186617s
	[INFO] 10.244.1.2:45824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223466s
	[INFO] 10.244.1.2:32974 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143816s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1864&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [99eea9846e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a] <==
	Trace[926961384]: [10.001431094s] [10.001431094s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59914->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50248->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50248->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49256->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49256->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-817269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_53_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:53:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:07:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:54:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-817269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc026746bcc47d49a7f508137c16c0a
	  System UUID:                0bc02674-6bcc-47d4-9a7f-508137c16c0a
	  Boot ID:                    1a383d96-7a2a-4a67-94ca-0f262bc14568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5cbmn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-mwpbw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-rq5pv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-817269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dxj2g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-817269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-817269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-p9lkl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-817269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-817269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 102s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-817269 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-817269 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-817269 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-817269 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Warning  ContainerGCFailed        3m25s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m49s (x3 over 3m38s)  kubelet          Node ha-817269 status is now: NodeNotReady
	  Normal   RegisteredNode           103s                   node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   RegisteredNode           97s                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	
	
	Name:               ha-817269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_54_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:54:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:07:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-817269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 260fc9ca7fe3421fbf6de250d4218230
	  System UUID:                260fc9ca-7fe3-421f-bf6d-e250d4218230
	  Boot ID:                    eee86d22-fdc7-4135-a072-8893326e7e42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wff9f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-817269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qcfqk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-817269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-817269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7t9b2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-817269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-817269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-817269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-817269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-817269-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  NodeNotReady             8m57s                  node-controller  Node ha-817269-m02 status is now: NodeNotReady
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node ha-817269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s                   node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           97s                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	
	
	Name:               ha-817269-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_56_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:56:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:06:52 +0000   Sat, 14 Sep 2024 00:06:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:06:52 +0000   Sat, 14 Sep 2024 00:06:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:06:52 +0000   Sat, 14 Sep 2024 00:06:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:06:52 +0000   Sat, 14 Sep 2024 00:06:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-817269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cd9a23c8c734501a4ad2e1089d5fd49
	  System UUID:                7cd9a23c-8c73-4501-a4ad-2e1089d5fd49
	  Boot ID:                    4e349745-b996-4c1f-b958-eac514dd0200
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vsts4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-817269-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-np2s8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-817269-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-817269-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bwr6g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-817269-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-817269-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-817269-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal   RegisteredNode           103s               node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal   RegisteredNode           97s                node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	  Normal   NodeNotReady             63s                node-controller  Node ha-817269-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-817269-m03 has been rebooted, boot id: 4e349745-b996-4c1f-b958-eac514dd0200
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-817269-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-817269-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                55s                kubelet          Node ha-817269-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-817269-m03 event: Registered Node ha-817269-m03 in Controller
	
	
	Name:               ha-817269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_57_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:57:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:07:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:07:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:07:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:07:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:07:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-817269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5153efd89a4042b8870c772e8476a0
	  System UUID:                ca5153ef-d89a-4042-b887-0c772e8476a0
	  Boot ID:                    3c4fbdb7-4778-4101-b683-94856a940de0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-45h44       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-b8pch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-817269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   NodeReady                9m48s              kubelet          Node ha-817269-m04 status is now: NodeReady
	  Normal   RegisteredNode           103s               node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   RegisteredNode           97s                node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   NodeNotReady             63s                node-controller  Node ha-817269-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-817269-m04 has been rebooted, boot id: 3c4fbdb7-4778-4101-b683-94856a940de0
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-817269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-817269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-817269-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-817269-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.741095] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.067049] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057264] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.183859] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.112834] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.262666] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.811628] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.142511] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.066169] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.376137] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.080016] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.054020] kauditd_printk_skb: 26 callbacks suppressed
	[Sep13 23:54] kauditd_printk_skb: 35 callbacks suppressed
	[ +43.648430] kauditd_printk_skb: 24 callbacks suppressed
	[Sep14 00:04] systemd-fstab-generator[3481]: Ignoring "noauto" option for root device
	[  +0.142916] systemd-fstab-generator[3493]: Ignoring "noauto" option for root device
	[  +0.175540] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.151389] systemd-fstab-generator[3519]: Ignoring "noauto" option for root device
	[  +0.284940] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +1.260334] systemd-fstab-generator[3643]: Ignoring "noauto" option for root device
	[  +6.617405] kauditd_printk_skb: 122 callbacks suppressed
	[Sep14 00:05] kauditd_printk_skb: 85 callbacks suppressed
	[  +9.052009] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.843404] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.725362] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a] <==
	{"level":"warn","ts":"2024-09-14T00:03:09.690327Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T00:03:01.840606Z","time spent":"7.825688811s","remote":"127.0.0.1:42408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	2024/09/14 00:03:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-14T00:03:09.754281Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.132:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:03:09.754342Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.132:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T00:03:09.754460Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6ae81251a1433dae","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-14T00:03:09.754772Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.754943Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755143Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755287Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755358Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755541Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755636Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755714Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755832Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755937Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755983Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.756015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.756043Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.759756Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.132:2380"}
	{"level":"warn","ts":"2024-09-14T00:03:09.759860Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.927911962s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-14T00:03:09.759925Z","caller":"traceutil/trace.go:171","msg":"trace[286015042] range","detail":"{range_begin:; range_end:; }","duration":"8.927983849s","start":"2024-09-14T00:03:00.831925Z","end":"2024-09-14T00:03:09.759909Z","steps":["trace[286015042] 'agreement among raft nodes before linearized reading'  (duration: 8.927911186s)"],"step_count":1}
	{"level":"error","ts":"2024-09-14T00:03:09.759977Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-14T00:03:09.759974Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.132:2380"}
	{"level":"info","ts":"2024-09-14T00:03:09.760064Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-817269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.132:2380"],"advertise-client-urls":["https://192.168.39.132:2379"]}
	
	
	==> etcd [7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29] <==
	{"level":"warn","ts":"2024-09-14T00:06:16.419843Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9bbbc15eae36a36d","rtt":"0s","error":"dial tcp 192.168.39.68:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-14T00:06:16.419961Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9bbbc15eae36a36d","rtt":"0s","error":"dial tcp 192.168.39.68:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-14T00:06:16.485735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:06:16.585345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:06:16.621756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:06:16.685619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:06:16.785653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:06:16.885982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ae81251a1433dae","from":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T00:06:17.880810Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.68:2380/version","remote-member-id":"9bbbc15eae36a36d","error":"Get \"https://192.168.39.68:2380/version\": dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:17.880864Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9bbbc15eae36a36d","error":"Get \"https://192.168.39.68:2380/version\": dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:21.420621Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9bbbc15eae36a36d","rtt":"0s","error":"dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:21.420681Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9bbbc15eae36a36d","rtt":"0s","error":"dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:21.882791Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.68:2380/version","remote-member-id":"9bbbc15eae36a36d","error":"Get \"https://192.168.39.68:2380/version\": dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:21.882944Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9bbbc15eae36a36d","error":"Get \"https://192.168.39.68:2380/version\": dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:25.885348Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.68:2380/version","remote-member-id":"9bbbc15eae36a36d","error":"Get \"https://192.168.39.68:2380/version\": dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:25.885663Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9bbbc15eae36a36d","error":"Get \"https://192.168.39.68:2380/version\": dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:26.421505Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9bbbc15eae36a36d","rtt":"0s","error":"dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T00:06:26.421672Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9bbbc15eae36a36d","rtt":"0s","error":"dial tcp 192.168.39.68:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-14T00:06:28.770589Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.770710Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.785676Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.786411Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6ae81251a1433dae","to":"9bbbc15eae36a36d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-14T00:06:28.786457Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.788841Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6ae81251a1433dae","to":"9bbbc15eae36a36d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-14T00:06:28.789062Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	
	
	==> kernel <==
	 00:07:17 up 14 min,  0 users,  load average: 0.55, 0.61, 0.33
	Linux ha-817269 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e] <==
	I0914 00:02:48.534383       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:02:48.534523       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:02:48.534758       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:02:48.534824       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:02:48.534948       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:02:48.534990       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:02:48.535144       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:02:48.535189       1 main.go:299] handling current node
	I0914 00:02:58.534896       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:02:58.535076       1 main.go:299] handling current node
	I0914 00:02:58.535196       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:02:58.535221       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:02:58.535448       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:02:58.535477       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:02:58.535567       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:02:58.535588       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	E0914 00:03:00.216891       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1841&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0914 00:03:08.535462       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:03:08.535561       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:03:08.535792       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:03:08.535820       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:03:08.535880       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:03:08.535899       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:03:08.536064       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:03:08.536165       1 main.go:299] handling current node
	
	
	==> kindnet [febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32] <==
	I0914 00:06:41.578494       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:06:51.576618       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:06:51.576696       1 main.go:299] handling current node
	I0914 00:06:51.576735       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:06:51.576741       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:06:51.576913       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:06:51.576932       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:06:51.576983       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:06:51.576988       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:07:01.584790       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:07:01.584844       1 main.go:299] handling current node
	I0914 00:07:01.584862       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:07:01.584869       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:07:01.585158       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:07:01.585227       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:07:01.585319       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:07:01.585341       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:07:11.578783       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:07:11.578984       1 main.go:299] handling current node
	I0914 00:07:11.579052       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:07:11.579081       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:07:11.579270       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:07:11.579293       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:07:11.579363       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:07:11.579386       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d] <==
	I0914 00:04:50.735817       1 options.go:228] external host was not specified, using 192.168.39.132
	I0914 00:04:50.750351       1 server.go:142] Version: v1.31.1
	I0914 00:04:50.750421       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:04:51.642309       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0914 00:04:51.652164       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:04:51.658522       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0914 00:04:51.658599       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0914 00:04:51.658859       1 instance.go:232] Using reconciler: lease
	W0914 00:05:11.639461       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0914 00:05:11.639604       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0914 00:05:11.661619       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260] <==
	I0914 00:05:37.538507       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0914 00:05:37.538599       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0914 00:05:37.629602       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 00:05:37.637732       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:05:37.637766       1 policy_source.go:224] refreshing policies
	I0914 00:05:37.644537       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0914 00:05:37.648914       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.6 192.168.39.68]
	I0914 00:05:37.649057       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 00:05:37.649229       1 aggregator.go:171] initial CRD sync complete...
	I0914 00:05:37.649277       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 00:05:37.649287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 00:05:37.649295       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:05:37.650861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 00:05:37.651319       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 00:05:37.651644       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 00:05:37.652948       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 00:05:37.653293       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 00:05:37.653324       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 00:05:37.653444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:05:37.659605       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0914 00:05:37.663068       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0914 00:05:37.665970       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 00:05:37.726769       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:05:38.557294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 00:05:38.981830       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132 192.168.39.6 192.168.39.68]
	
	
	==> kube-controller-manager [95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994] <==
	I0914 00:06:01.170420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.825µs"
	I0914 00:06:14.148471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:06:14.148585       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-817269-m04"
	I0914 00:06:14.152233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:06:14.174277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:06:14.189230       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:06:14.272745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.184368ms"
	I0914 00:06:14.273348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="112.577µs"
	I0914 00:06:15.999614       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:06:16.087289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m02"
	I0914 00:06:19.462700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:06:22.036715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:06:22.051708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:06:22.987457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.345µs"
	I0914 00:06:24.370740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:06:29.548126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:06:38.345657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:06:38.441590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:06:41.262842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.743912ms"
	I0914 00:06:41.263118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.404µs"
	I0914 00:06:52.849798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m03"
	I0914 00:07:09.164272       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-817269-m04"
	I0914 00:07:09.165057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:07:09.185601       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:07:09.394189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	
	
	==> kube-controller-manager [fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588] <==
	I0914 00:04:51.614359       1 serving.go:386] Generated self-signed cert in-memory
	I0914 00:04:52.182194       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 00:04:52.182231       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:04:52.186316       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 00:04:52.186814       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 00:04:52.187007       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 00:04:52.187285       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 00:05:12.669897       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.132:8443/healthz\": dial tcp 192.168.39.132:8443: connect: connection refused"
	
	
	==> kube-proxy [3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:04:54.966811       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:04:58.038963       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:01.111924       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:07.256381       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:16.470637       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:34.902911       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0914 00:05:34.903145       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0914 00:05:34.903285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:05:34.936601       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:05:34.936656       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:05:34.936687       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:05:34.939049       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:05:34.939561       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:05:34.939615       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:05:34.941337       1 config.go:199] "Starting service config controller"
	I0914 00:05:34.941412       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:05:34.941468       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:05:34.941485       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:05:34.942258       1 config.go:328] "Starting node config controller"
	I0914 00:05:34.942302       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:05:35.941567       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:05:35.941837       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:05:35.942604       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d] <==
	E0914 00:01:51.798726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:51.798688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:51.798800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:51.798882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:51.798951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:59.286566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757": dial tcp 192.168.39.254:8443: connect: no route to host
	W0914 00:01:59.286709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:59.287429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:59.286645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:59.287564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0914 00:01:59.286712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:09.080353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	W0914 00:02:09.079945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:09.081225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0914 00:02:09.081174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:09.081523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:09.081616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:24.438902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:24.439191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:33.655450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:33.655620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:36.728544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:36.728734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:03:07.447601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:03:07.447703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5] <==
	E0913 23:56:30.847268       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wff9f\": pod busybox-7dff88458-wff9f is already assigned to node \"ha-817269-m02\"" pod="default/busybox-7dff88458-wff9f"
	E0913 23:56:30.906194       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:56:30.906282       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e288c7d7-36f3-4fd1-a944-403098141304(default/busybox-7dff88458-5cbmn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5cbmn"
	E0913 23:56:30.906305       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" pod="default/busybox-7dff88458-5cbmn"
	I0913 23:56:30.906349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:57:08.751565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	E0913 23:57:08.751687       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 234c68a0-c2e4-4784-8bda-6c0a1ffc84db(kube-system/kube-proxy-tdcn8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tdcn8"
	E0913 23:57:08.751719       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" pod="kube-system/kube-proxy-tdcn8"
	I0913 23:57:08.751751       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	E0914 00:02:54.572065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0914 00:02:56.475040       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0914 00:02:56.618334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0914 00:02:58.116858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0914 00:02:59.284667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0914 00:02:59.656135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0914 00:02:59.874994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0914 00:03:00.576217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0914 00:03:00.695879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0914 00:03:00.864188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0914 00:03:01.453705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0914 00:03:01.821481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0914 00:03:03.366617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0914 00:03:05.265365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0914 00:03:05.362409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0914 00:03:09.649615       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e] <==
	W0914 00:05:30.354015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.132:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:30.354081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.132:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:30.621298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.132:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:30.621378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.132:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.121908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.132:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.122030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.132:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.506592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.132:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.506704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.132:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.871223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.132:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.871281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.132:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.881048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.132:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.881181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.132:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:32.101270       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.132:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:32.101323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.132:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:32.632634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.132:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:32.632720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.132:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:32.675686       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.132:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:32.676138       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.132:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:33.126707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.132:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:33.126786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.132:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:34.692803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.132:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:34.692929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.132:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:35.401005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.132:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:35.401073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.132:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	I0914 00:05:55.381748       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:06:02 ha-817269 kubelet[1306]: E0914 00:06:02.586890    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272362583756958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:02 ha-817269 kubelet[1306]: E0914 00:06:02.587329    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272362583756958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:07 ha-817269 kubelet[1306]: I0914 00:06:07.363343    1306 scope.go:117] "RemoveContainer" containerID="a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45"
	Sep 14 00:06:12 ha-817269 kubelet[1306]: E0914 00:06:12.589463    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272372588974429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:12 ha-817269 kubelet[1306]: E0914 00:06:12.589532    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272372588974429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:22 ha-817269 kubelet[1306]: E0914 00:06:22.592030    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272382591079683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:22 ha-817269 kubelet[1306]: E0914 00:06:22.592075    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272382591079683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:27 ha-817269 kubelet[1306]: I0914 00:06:27.363693    1306 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-817269" podUID="1fda5312-9aa8-4ab9-b2db-178289f09fd1"
	Sep 14 00:06:27 ha-817269 kubelet[1306]: I0914 00:06:27.381714    1306 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-817269"
	Sep 14 00:06:32 ha-817269 kubelet[1306]: I0914 00:06:32.441952    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-817269" podStartSLOduration=5.4418851 podStartE2EDuration="5.4418851s" podCreationTimestamp="2024-09-14 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-14 00:06:32.44175767 +0000 UTC m=+760.214190006" watchObservedRunningTime="2024-09-14 00:06:32.4418851 +0000 UTC m=+760.214317432"
	Sep 14 00:06:32 ha-817269 kubelet[1306]: E0914 00:06:32.594066    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272392593762799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:32 ha-817269 kubelet[1306]: E0914 00:06:32.594840    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272392593762799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:42 ha-817269 kubelet[1306]: E0914 00:06:42.597021    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272402596668694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:42 ha-817269 kubelet[1306]: E0914 00:06:42.597057    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272402596668694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:52 ha-817269 kubelet[1306]: E0914 00:06:52.379767    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:06:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:06:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:06:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:06:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:06:52 ha-817269 kubelet[1306]: E0914 00:06:52.601293    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272412598786952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:06:52 ha-817269 kubelet[1306]: E0914 00:06:52.601328    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272412598786952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:07:02 ha-817269 kubelet[1306]: E0914 00:07:02.602668    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272422602408162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:07:02 ha-817269 kubelet[1306]: E0914 00:07:02.602707    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272422602408162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:07:12 ha-817269 kubelet[1306]: E0914 00:07:12.604279    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272432603994315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:07:12 ha-817269 kubelet[1306]: E0914 00:07:12.604680    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272432603994315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0914 00:07:16.713485   32813 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19640-5422/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-817269 -n ha-817269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-817269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.84s)

x
+
TestMultiControlPlane/serial/StopCluster (141.73s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 stop -v=7 --alsologtostderr
E0914 00:09:31.534974   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 stop -v=7 --alsologtostderr: exit status 82 (2m0.461056267s)

-- stdout --
	* Stopping node "ha-817269-m04"  ...
	
	

-- /stdout --
** stderr ** 
	I0914 00:07:36.002748   33208 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:07:36.002870   33208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:07:36.002878   33208 out.go:358] Setting ErrFile to fd 2...
	I0914 00:07:36.002883   33208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:07:36.003082   33208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:07:36.003319   33208 out.go:352] Setting JSON to false
	I0914 00:07:36.003394   33208 mustload.go:65] Loading cluster: ha-817269
	I0914 00:07:36.003809   33208 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:07:36.003906   33208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0914 00:07:36.004097   33208 mustload.go:65] Loading cluster: ha-817269
	I0914 00:07:36.004226   33208 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:07:36.004248   33208 stop.go:39] StopHost: ha-817269-m04
	I0914 00:07:36.004623   33208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:07:36.004663   33208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:07:36.019777   33208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
	I0914 00:07:36.020354   33208 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:07:36.020906   33208 main.go:141] libmachine: Using API Version  1
	I0914 00:07:36.020927   33208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:07:36.021287   33208 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:07:36.023587   33208 out.go:177] * Stopping node "ha-817269-m04"  ...
	I0914 00:07:36.024915   33208 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 00:07:36.024956   33208 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:07:36.025197   33208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 00:07:36.025224   33208 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:07:36.028220   33208 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:07:36.028611   33208 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 01:07:04 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:07:36.028646   33208 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:07:36.028771   33208 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:07:36.028937   33208 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:07:36.029056   33208 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:07:36.029203   33208 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	I0914 00:07:36.110015   33208 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 00:07:36.162103   33208 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 00:07:36.214273   33208 main.go:141] libmachine: Stopping "ha-817269-m04"...
	I0914 00:07:36.214310   33208 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:07:36.216062   33208 main.go:141] libmachine: (ha-817269-m04) Calling .Stop
	I0914 00:07:36.219708   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 0/120
	I0914 00:07:37.221556   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 1/120
	I0914 00:07:38.223474   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 2/120
	I0914 00:07:39.225259   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 3/120
	I0914 00:07:40.226460   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 4/120
	I0914 00:07:41.228598   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 5/120
	I0914 00:07:42.230274   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 6/120
	I0914 00:07:43.231997   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 7/120
	I0914 00:07:44.234260   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 8/120
	I0914 00:07:45.235806   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 9/120
	I0914 00:07:46.237264   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 10/120
	I0914 00:07:47.238761   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 11/120
	I0914 00:07:48.240094   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 12/120
	I0914 00:07:49.242416   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 13/120
	I0914 00:07:50.243681   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 14/120
	I0914 00:07:51.245564   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 15/120
	I0914 00:07:52.246831   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 16/120
	I0914 00:07:53.248137   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 17/120
	I0914 00:07:54.250355   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 18/120
	I0914 00:07:55.251984   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 19/120
	I0914 00:07:56.254069   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 20/120
	I0914 00:07:57.255427   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 21/120
	I0914 00:07:58.256952   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 22/120
	I0914 00:07:59.258303   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 23/120
	I0914 00:08:00.259564   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 24/120
	I0914 00:08:01.261566   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 25/120
	I0914 00:08:02.263012   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 26/120
	I0914 00:08:03.264316   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 27/120
	I0914 00:08:04.265925   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 28/120
	I0914 00:08:05.267357   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 29/120
	I0914 00:08:06.269315   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 30/120
	I0914 00:08:07.270669   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 31/120
	I0914 00:08:08.272128   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 32/120
	I0914 00:08:09.273528   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 33/120
	I0914 00:08:10.274826   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 34/120
	I0914 00:08:11.276917   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 35/120
	I0914 00:08:12.278314   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 36/120
	I0914 00:08:13.279737   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 37/120
	I0914 00:08:14.281363   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 38/120
	I0914 00:08:15.284147   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 39/120
	I0914 00:08:16.286181   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 40/120
	I0914 00:08:17.288417   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 41/120
	I0914 00:08:18.289756   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 42/120
	I0914 00:08:19.291146   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 43/120
	I0914 00:08:20.292523   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 44/120
	I0914 00:08:21.294563   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 45/120
	I0914 00:08:22.295923   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 46/120
	I0914 00:08:23.297146   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 47/120
	I0914 00:08:24.298425   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 48/120
	I0914 00:08:25.299683   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 49/120
	I0914 00:08:26.301669   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 50/120
	I0914 00:08:27.303076   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 51/120
	I0914 00:08:28.304387   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 52/120
	I0914 00:08:29.305765   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 53/120
	I0914 00:08:30.307182   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 54/120
	I0914 00:08:31.308999   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 55/120
	I0914 00:08:32.310640   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 56/120
	I0914 00:08:33.312149   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 57/120
	I0914 00:08:34.313412   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 58/120
	I0914 00:08:35.314811   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 59/120
	I0914 00:08:36.316896   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 60/120
	I0914 00:08:37.318217   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 61/120
	I0914 00:08:38.319586   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 62/120
	I0914 00:08:39.320989   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 63/120
	I0914 00:08:40.322201   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 64/120
	I0914 00:08:41.324385   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 65/120
	I0914 00:08:42.325636   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 66/120
	I0914 00:08:43.326861   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 67/120
	I0914 00:08:44.328349   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 68/120
	I0914 00:08:45.330188   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 69/120
	I0914 00:08:46.332644   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 70/120
	I0914 00:08:47.334106   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 71/120
	I0914 00:08:48.335968   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 72/120
	I0914 00:08:49.337425   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 73/120
	I0914 00:08:50.338702   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 74/120
	I0914 00:08:51.340715   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 75/120
	I0914 00:08:52.342319   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 76/120
	I0914 00:08:53.343732   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 77/120
	I0914 00:08:54.345073   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 78/120
	I0914 00:08:55.346455   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 79/120
	I0914 00:08:56.348862   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 80/120
	I0914 00:08:57.350655   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 81/120
	I0914 00:08:58.352045   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 82/120
	I0914 00:08:59.354119   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 83/120
	I0914 00:09:00.355506   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 84/120
	I0914 00:09:01.357503   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 85/120
	I0914 00:09:02.359491   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 86/120
	I0914 00:09:03.361136   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 87/120
	I0914 00:09:04.362639   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 88/120
	I0914 00:09:05.364181   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 89/120
	I0914 00:09:06.366591   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 90/120
	I0914 00:09:07.367872   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 91/120
	I0914 00:09:08.369393   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 92/120
	I0914 00:09:09.370811   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 93/120
	I0914 00:09:10.372363   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 94/120
	I0914 00:09:11.374460   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 95/120
	I0914 00:09:12.376344   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 96/120
	I0914 00:09:13.378311   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 97/120
	I0914 00:09:14.379823   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 98/120
	I0914 00:09:15.381228   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 99/120
	I0914 00:09:16.383333   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 100/120
	I0914 00:09:17.384593   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 101/120
	I0914 00:09:18.385900   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 102/120
	I0914 00:09:19.387106   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 103/120
	I0914 00:09:20.388391   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 104/120
	I0914 00:09:21.390299   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 105/120
	I0914 00:09:22.391521   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 106/120
	I0914 00:09:23.393428   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 107/120
	I0914 00:09:24.394896   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 108/120
	I0914 00:09:25.396759   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 109/120
	I0914 00:09:26.398724   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 110/120
	I0914 00:09:27.400236   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 111/120
	I0914 00:09:28.401881   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 112/120
	I0914 00:09:29.403147   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 113/120
	I0914 00:09:30.404510   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 114/120
	I0914 00:09:31.406558   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 115/120
	I0914 00:09:32.409120   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 116/120
	I0914 00:09:33.410269   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 117/120
	I0914 00:09:34.411547   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 118/120
	I0914 00:09:35.413167   33208 main.go:141] libmachine: (ha-817269-m04) Waiting for machine to stop 119/120
	I0914 00:09:36.413948   33208 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 00:09:36.414006   33208 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 00:09:36.415647   33208 out.go:201] 
	W0914 00:09:36.416849   33208 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 00:09:36.416866   33208 out.go:270] * 
	* 
	W0914 00:09:36.419039   33208 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:09:36.420170   33208 out.go:201] 

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-817269 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr: exit status 3 (18.985933051s)

-- stdout --
	ha-817269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-817269-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0914 00:09:36.466397   33655 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:09:36.466646   33655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:09:36.466656   33655 out.go:358] Setting ErrFile to fd 2...
	I0914 00:09:36.466660   33655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:09:36.466827   33655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:09:36.466988   33655 out.go:352] Setting JSON to false
	I0914 00:09:36.467020   33655 mustload.go:65] Loading cluster: ha-817269
	I0914 00:09:36.467068   33655 notify.go:220] Checking for updates...
	I0914 00:09:36.467617   33655 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:09:36.467641   33655 status.go:255] checking status of ha-817269 ...
	I0914 00:09:36.468114   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.468164   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.484366   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0914 00:09:36.484826   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.485435   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.485454   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.485791   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.485973   33655 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:09:36.487625   33655 status.go:330] ha-817269 host status = "Running" (err=<nil>)
	I0914 00:09:36.487642   33655 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:09:36.487943   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.487992   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.502635   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0914 00:09:36.503125   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.503687   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.503714   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.504134   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.504339   33655 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:09:36.506849   33655 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:09:36.507281   33655 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:09:36.507309   33655 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:09:36.507374   33655 host.go:66] Checking if "ha-817269" exists ...
	I0914 00:09:36.507699   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.507733   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.523113   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34761
	I0914 00:09:36.523569   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.524006   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.524029   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.524331   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.524519   33655 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:09:36.524713   33655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:09:36.524742   33655 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:09:36.527645   33655 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:09:36.528083   33655 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:09:36.528118   33655 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:09:36.528241   33655 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:09:36.528440   33655 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:09:36.528614   33655 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:09:36.528774   33655 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:09:36.617107   33655 ssh_runner.go:195] Run: systemctl --version
	I0914 00:09:36.624642   33655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:09:36.642499   33655 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:09:36.642533   33655 api_server.go:166] Checking apiserver status ...
	I0914 00:09:36.642593   33655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:09:36.659001   33655 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4878/cgroup
	W0914 00:09:36.670213   33655 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4878/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:09:36.670276   33655 ssh_runner.go:195] Run: ls
	I0914 00:09:36.675504   33655 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:09:36.680187   33655 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:09:36.680217   33655 status.go:422] ha-817269 apiserver status = Running (err=<nil>)
	I0914 00:09:36.680229   33655 status.go:257] ha-817269 status: &{Name:ha-817269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:09:36.680250   33655 status.go:255] checking status of ha-817269-m02 ...
	I0914 00:09:36.680623   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.680694   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.695571   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0914 00:09:36.696104   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.696755   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.696780   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.697059   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.697241   33655 main.go:141] libmachine: (ha-817269-m02) Calling .GetState
	I0914 00:09:36.698660   33655 status.go:330] ha-817269-m02 host status = "Running" (err=<nil>)
	I0914 00:09:36.698674   33655 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:09:36.698971   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.699004   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.714215   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0914 00:09:36.714782   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.715268   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.715297   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.715594   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.715798   33655 main.go:141] libmachine: (ha-817269-m02) Calling .GetIP
	I0914 00:09:36.719034   33655 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:09:36.719476   33655 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 01:04:54 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:09:36.719499   33655 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:09:36.719728   33655 host.go:66] Checking if "ha-817269-m02" exists ...
	I0914 00:09:36.720050   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.720090   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.735545   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0914 00:09:36.736091   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.736590   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.736619   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.736911   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.737079   33655 main.go:141] libmachine: (ha-817269-m02) Calling .DriverName
	I0914 00:09:36.737239   33655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:09:36.737267   33655 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHHostname
	I0914 00:09:36.740071   33655 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:09:36.740453   33655 main.go:141] libmachine: (ha-817269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:e8:40", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 01:04:54 +0000 UTC Type:0 Mac:52:54:00:12:e8:40 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-817269-m02 Clientid:01:52:54:00:12:e8:40}
	I0914 00:09:36.740476   33655 main.go:141] libmachine: (ha-817269-m02) DBG | domain ha-817269-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:12:e8:40 in network mk-ha-817269
	I0914 00:09:36.740613   33655 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHPort
	I0914 00:09:36.740776   33655 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHKeyPath
	I0914 00:09:36.740918   33655 main.go:141] libmachine: (ha-817269-m02) Calling .GetSSHUsername
	I0914 00:09:36.741052   33655 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m02/id_rsa Username:docker}
	I0914 00:09:36.824783   33655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:09:36.841621   33655 kubeconfig.go:125] found "ha-817269" server: "https://192.168.39.254:8443"
	I0914 00:09:36.841655   33655 api_server.go:166] Checking apiserver status ...
	I0914 00:09:36.841698   33655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:09:36.869469   33655 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0914 00:09:36.881315   33655 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:09:36.881368   33655 ssh_runner.go:195] Run: ls
	I0914 00:09:36.886247   33655 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 00:09:36.891727   33655 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 00:09:36.891750   33655 status.go:422] ha-817269-m02 apiserver status = Running (err=<nil>)
	I0914 00:09:36.891758   33655 status.go:257] ha-817269-m02 status: &{Name:ha-817269-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:09:36.891773   33655 status.go:255] checking status of ha-817269-m04 ...
	I0914 00:09:36.892107   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.892157   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.906966   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0914 00:09:36.907463   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.908058   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.908082   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.908402   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.908625   33655 main.go:141] libmachine: (ha-817269-m04) Calling .GetState
	I0914 00:09:36.910402   33655 status.go:330] ha-817269-m04 host status = "Running" (err=<nil>)
	I0914 00:09:36.910421   33655 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:09:36.910774   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.910816   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.927483   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0914 00:09:36.927952   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.928572   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.928600   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.928992   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.929203   33655 main.go:141] libmachine: (ha-817269-m04) Calling .GetIP
	I0914 00:09:36.932431   33655 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:09:36.932954   33655 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 01:07:04 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:09:36.932981   33655 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:09:36.933159   33655 host.go:66] Checking if "ha-817269-m04" exists ...
	I0914 00:09:36.933470   33655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:09:36.933506   33655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:09:36.948929   33655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 00:09:36.949443   33655 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:09:36.949974   33655 main.go:141] libmachine: Using API Version  1
	I0914 00:09:36.949992   33655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:09:36.950323   33655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:09:36.950598   33655 main.go:141] libmachine: (ha-817269-m04) Calling .DriverName
	I0914 00:09:36.950887   33655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:09:36.950910   33655 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHHostname
	I0914 00:09:36.954063   33655 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:09:36.954489   33655 main.go:141] libmachine: (ha-817269-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:81:be", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 01:07:04 +0000 UTC Type:0 Mac:52:54:00:3f:81:be Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-817269-m04 Clientid:01:52:54:00:3f:81:be}
	I0914 00:09:36.954622   33655 main.go:141] libmachine: (ha-817269-m04) DBG | domain ha-817269-m04 has defined IP address 192.168.39.248 and MAC address 52:54:00:3f:81:be in network mk-ha-817269
	I0914 00:09:36.954682   33655 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHPort
	I0914 00:09:36.954902   33655 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHKeyPath
	I0914 00:09:36.955025   33655 main.go:141] libmachine: (ha-817269-m04) Calling .GetSSHUsername
	I0914 00:09:36.955195   33655 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269-m04/id_rsa Username:docker}
	W0914 00:09:55.408018   33655 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.248:22: connect: no route to host
	W0914 00:09:55.408108   33655 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host
	E0914 00:09:55.408128   33655 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host
	I0914 00:09:55.408164   33655 status.go:257] ha-817269-m04 status: &{Name:ha-817269-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0914 00:09:55.408190   33655 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr" : exit status 3
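The failure above is a transport problem rather than an apiserver problem: the healthz probes through the 192.168.39.254:8443 VIP still return 200 for ha-817269 and ha-817269-m02, while the SSH dial to ha-817269-m04 fails with "no route to host". As a minimal, hypothetical sketch in plain Go stdlib (not minikube's sshutil/status code), the same reachability check amounts to a bounded TCP dial:

    // reachable.go: illustrative only; approximates the dial that the status
    // command retries before reporting "dial tcp 192.168.39.248:22: connect:
    // no route to host" for the m04 node in the log above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // sshReachable reports whether a TCP connection to addr (host:22) can be
    // opened within timeout. It checks raw reachability only, not SSH auth.
    func sshReachable(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	// Address taken from the log above; on this run the node was
    	// unreachable, so the dial is expected to fail.
    	if err := sshReachable("192.168.39.248:22", 10*time.Second); err != nil {
    		fmt.Println("node unreachable:", err)
    		return
    	}
    	fmt.Println("node reachable on port 22")
    }

A dial error of this kind is what the final status line above reflects: Host:Error and Kubelet:Nonexistent for ha-817269-m04, and exit status 3 overall.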
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-817269 -n ha-817269
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-817269 logs -n 25: (1.652495247s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m04 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp testdata/cp-test.txt                                               | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269:/home/docker/cp-test_ha-817269-m04_ha-817269.txt                      |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269 sudo cat                                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269.txt                                |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m02:/home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m02 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt                             | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m03:/home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n                                                                | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | ha-817269-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-817269 ssh -n ha-817269-m03 sudo cat                                         | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC | 13 Sep 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-817269 node stop m02 -v=7                                                    | ha-817269 | jenkins | v1.34.0 | 13 Sep 24 23:57 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-817269 node start m02 -v=7                                                   | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-817269 -v=7                                                          | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-817269 -v=7                                                               | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-817269 --wait=true -v=7                                                   | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:03 UTC | 14 Sep 24 00:07 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-817269                                                               | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:07 UTC |                     |
	| node    | ha-817269 node delete m03 -v=7                                                  | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:07 UTC | 14 Sep 24 00:07 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-817269 stop -v=7                                                             | ha-817269 | jenkins | v1.34.0 | 14 Sep 24 00:07 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:03:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:03:08.816919   31414 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:03:08.817145   31414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:03:08.817152   31414 out.go:358] Setting ErrFile to fd 2...
	I0914 00:03:08.817156   31414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:03:08.817344   31414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:03:08.817908   31414 out.go:352] Setting JSON to false
	I0914 00:03:08.818813   31414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2735,"bootTime":1726269454,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:03:08.818907   31414 start.go:139] virtualization: kvm guest
	I0914 00:03:08.821258   31414 out.go:177] * [ha-817269] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:03:08.822403   31414 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:03:08.822412   31414 notify.go:220] Checking for updates...
	I0914 00:03:08.823615   31414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:03:08.824741   31414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:03:08.825852   31414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:03:08.826725   31414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:03:08.827809   31414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:03:08.829519   31414 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:03:08.829619   31414 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:03:08.830107   31414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:03:08.830157   31414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:03:08.846135   31414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0914 00:03:08.846645   31414 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:03:08.847168   31414 main.go:141] libmachine: Using API Version  1
	I0914 00:03:08.847187   31414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:03:08.847496   31414 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:03:08.847681   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:03:08.884738   31414 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:03:08.886032   31414 start.go:297] selected driver: kvm2
	I0914 00:03:08.886050   31414 start.go:901] validating driver "kvm2" against &{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:03:08.886192   31414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:03:08.886504   31414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:03:08.886573   31414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:03:08.902015   31414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:03:08.902673   31414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:03:08.902709   31414 cni.go:84] Creating CNI manager for ""
	I0914 00:03:08.902760   31414 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 00:03:08.902816   31414 start.go:340] cluster config:
	{Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:03:08.902950   31414 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:03:08.904470   31414 out.go:177] * Starting "ha-817269" primary control-plane node in "ha-817269" cluster
	I0914 00:03:08.905402   31414 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:03:08.905432   31414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:03:08.905441   31414 cache.go:56] Caching tarball of preloaded images
	I0914 00:03:08.905524   31414 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:03:08.905644   31414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:03:08.905779   31414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/config.json ...
	I0914 00:03:08.905975   31414 start.go:360] acquireMachinesLock for ha-817269: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:03:08.906034   31414 start.go:364] duration metric: took 40.429µs to acquireMachinesLock for "ha-817269"
	I0914 00:03:08.906053   31414 start.go:96] Skipping create...Using existing machine configuration
	I0914 00:03:08.906063   31414 fix.go:54] fixHost starting: 
	I0914 00:03:08.906345   31414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:03:08.906382   31414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:03:08.920479   31414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0914 00:03:08.920895   31414 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:03:08.921392   31414 main.go:141] libmachine: Using API Version  1
	I0914 00:03:08.921411   31414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:03:08.921701   31414 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:03:08.921860   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:03:08.921947   31414 main.go:141] libmachine: (ha-817269) Calling .GetState
	I0914 00:03:08.923348   31414 fix.go:112] recreateIfNeeded on ha-817269: state=Running err=<nil>
	W0914 00:03:08.923377   31414 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 00:03:08.925133   31414 out.go:177] * Updating the running kvm2 "ha-817269" VM ...
	I0914 00:03:08.926089   31414 machine.go:93] provisionDockerMachine start ...
	I0914 00:03:08.926110   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:03:08.926302   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:08.928469   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:08.928879   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:08.928911   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:08.929014   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:08.929188   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:08.929328   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:08.929445   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:08.929578   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:08.929758   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:08.929769   31414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:03:09.040884   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269
	
	I0914 00:03:09.040909   31414 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0914 00:03:09.041157   31414 buildroot.go:166] provisioning hostname "ha-817269"
	I0914 00:03:09.041179   31414 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0914 00:03:09.041379   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.044145   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.044560   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.044595   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.044724   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.044893   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.045079   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.045190   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.045335   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:09.045548   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:09.045573   31414 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-817269 && echo "ha-817269" | sudo tee /etc/hostname
	I0914 00:03:09.172681   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-817269
	
	I0914 00:03:09.172708   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.175711   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.176134   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.176163   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.176336   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.176527   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.176696   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.176824   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.176977   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:09.177183   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:09.177206   31414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-817269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-817269/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-817269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:03:09.288798   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:03:09.288836   31414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:03:09.288862   31414 buildroot.go:174] setting up certificates
	I0914 00:03:09.288873   31414 provision.go:84] configureAuth start
	I0914 00:03:09.288886   31414 main.go:141] libmachine: (ha-817269) Calling .GetMachineName
	I0914 00:03:09.289139   31414 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:03:09.291699   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.292055   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.292082   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.292230   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.294300   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.294613   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.294638   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.294791   31414 provision.go:143] copyHostCerts
	I0914 00:03:09.294816   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:03:09.294852   31414 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:03:09.294863   31414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:03:09.294936   31414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:03:09.295026   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:03:09.295049   31414 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:03:09.295059   31414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:03:09.295095   31414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:03:09.295157   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:03:09.295182   31414 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:03:09.295191   31414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:03:09.295230   31414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:03:09.295314   31414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.ha-817269 san=[127.0.0.1 192.168.39.132 ha-817269 localhost minikube]
	I0914 00:03:09.377525   31414 provision.go:177] copyRemoteCerts
	I0914 00:03:09.377588   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:03:09.377613   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.380669   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.381037   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.381070   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.381274   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.381486   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.381665   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.381822   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:03:09.467365   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 00:03:09.467444   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:03:09.492394   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 00:03:09.492469   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0914 00:03:09.516523   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 00:03:09.516589   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:03:09.542239   31414 provision.go:87] duration metric: took 253.352545ms to configureAuth
	I0914 00:03:09.542267   31414 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:03:09.542549   31414 config.go:182] Loaded profile config "ha-817269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:03:09.542671   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:03:09.545457   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.545884   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:03:09.545920   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:03:09.546054   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:03:09.546239   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.546381   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:03:09.546501   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:03:09.546640   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:03:09.546852   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:03:09.546872   31414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:04:40.461972   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:04:40.462008   31414 machine.go:96] duration metric: took 1m31.535906473s to provisionDockerMachine
	I0914 00:04:40.462023   31414 start.go:293] postStartSetup for "ha-817269" (driver="kvm2")
	I0914 00:04:40.462037   31414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:04:40.462078   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.462383   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:04:40.462423   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.465839   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.466281   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.466324   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.466465   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.466644   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.466787   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.466955   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:04:40.550726   31414 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:04:40.554811   31414 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:04:40.554832   31414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:04:40.554898   31414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:04:40.554987   31414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:04:40.555000   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0914 00:04:40.555104   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:04:40.564108   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:04:40.587088   31414 start.go:296] duration metric: took 125.050457ms for postStartSetup
	I0914 00:04:40.587127   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.587400   31414 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0914 00:04:40.587424   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.589973   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.590385   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.590408   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.590619   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.590791   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.590904   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.591020   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	W0914 00:04:40.674713   31414 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0914 00:04:40.674735   31414 fix.go:56] duration metric: took 1m31.768673407s for fixHost
	I0914 00:04:40.674768   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.677173   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.677479   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.677512   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.677715   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.677866   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.677985   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.678091   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.678204   31414 main.go:141] libmachine: Using SSH client type: native
	I0914 00:04:40.678378   31414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0914 00:04:40.678399   31414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:04:40.788409   31414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726272280.746681996
	
	I0914 00:04:40.788436   31414 fix.go:216] guest clock: 1726272280.746681996
	I0914 00:04:40.788446   31414 fix.go:229] Guest: 2024-09-14 00:04:40.746681996 +0000 UTC Remote: 2024-09-14 00:04:40.674753415 +0000 UTC m=+91.893799601 (delta=71.928581ms)
	I0914 00:04:40.788470   31414 fix.go:200] guest clock delta is within tolerance: 71.928581ms
	I0914 00:04:40.788477   31414 start.go:83] releasing machines lock for "ha-817269", held for 1m31.882431541s
	I0914 00:04:40.788501   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.788779   31414 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:04:40.791546   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.791863   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.791888   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.792026   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.792558   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.792709   31414 main.go:141] libmachine: (ha-817269) Calling .DriverName
	I0914 00:04:40.792810   31414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:04:40.792846   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.792886   31414 ssh_runner.go:195] Run: cat /version.json
	I0914 00:04:40.792908   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHHostname
	I0914 00:04:40.795258   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.795507   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.795757   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.795798   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.795950   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.796084   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:40.796089   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.796109   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:40.796286   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHPort
	I0914 00:04:40.796309   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.796487   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHKeyPath
	I0914 00:04:40.796511   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:04:40.796605   31414 main.go:141] libmachine: (ha-817269) Calling .GetSSHUsername
	I0914 00:04:40.796752   31414 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/ha-817269/id_rsa Username:docker}
	I0914 00:04:40.908562   31414 ssh_runner.go:195] Run: systemctl --version
	I0914 00:04:40.914948   31414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:04:41.075274   31414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:04:41.081015   31414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:04:41.081120   31414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:04:41.090293   31414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 00:04:41.090332   31414 start.go:495] detecting cgroup driver to use...
	I0914 00:04:41.090393   31414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:04:41.107756   31414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:04:41.121864   31414 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:04:41.121914   31414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:04:41.135449   31414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:04:41.148796   31414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:04:41.295237   31414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:04:41.437117   31414 docker.go:233] disabling docker service ...
	I0914 00:04:41.437181   31414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:04:41.454439   31414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:04:41.468440   31414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:04:41.613226   31414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:04:41.759778   31414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:04:41.775373   31414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:04:41.794924   31414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:04:41.794984   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.805666   31414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:04:41.805732   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.816148   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.826366   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.836588   31414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:04:41.847992   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.859087   31414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.871087   31414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:04:41.882727   31414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:04:41.892798   31414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:04:41.902411   31414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:04:42.047830   31414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:04:42.812684   31414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:04:42.812760   31414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:04:42.818791   31414 start.go:563] Will wait 60s for crictl version
	I0914 00:04:42.818836   31414 ssh_runner.go:195] Run: which crictl
	I0914 00:04:42.822330   31414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:04:42.862126   31414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:04:42.862220   31414 ssh_runner.go:195] Run: crio --version
	I0914 00:04:42.890916   31414 ssh_runner.go:195] Run: crio --version
	I0914 00:04:42.919623   31414 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
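For reference: the steps above rewrite /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restart cri-o. A minimal shell sketch, not part of this run and assuming SSH access to the ha-817269 guest, that would confirm the resulting runtime state:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml                           # should point crictl at unix:///var/run/crio/crio.sock
    sudo crictl info >/dev/null && echo "cri-o socket is answering"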
	I0914 00:04:42.920900   31414 main.go:141] libmachine: (ha-817269) Calling .GetIP
	I0914 00:04:42.923601   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:42.923967   31414 main.go:141] libmachine: (ha-817269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:63:b0", ip: ""} in network mk-ha-817269: {Iface:virbr1 ExpiryTime:2024-09-14 00:53:25 +0000 UTC Type:0 Mac:52:54:00:ff:63:b0 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-817269 Clientid:01:52:54:00:ff:63:b0}
	I0914 00:04:42.923995   31414 main.go:141] libmachine: (ha-817269) DBG | domain ha-817269 has defined IP address 192.168.39.132 and MAC address 52:54:00:ff:63:b0 in network mk-ha-817269
	I0914 00:04:42.924176   31414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 00:04:42.928787   31414 kubeadm.go:883] updating cluster {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:04:42.928923   31414 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:04:42.928989   31414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:04:42.971731   31414 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:04:42.971755   31414 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:04:42.971828   31414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:04:43.010560   31414 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:04:43.010587   31414 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:04:43.010595   31414 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.31.1 crio true true} ...
	I0914 00:04:43.010688   31414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-817269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
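For reference: the [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that the log writes below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch, assuming systemd on the guest, for inspecting the merged unit after that write:

    systemctl cat kubelet                  # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show -p ExecStart kubelet    # effective ExecStart, including --node-ip=192.168.39.132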
	I0914 00:04:43.010752   31414 ssh_runner.go:195] Run: crio config
	I0914 00:04:43.057397   31414 cni.go:84] Creating CNI manager for ""
	I0914 00:04:43.057421   31414 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 00:04:43.057433   31414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:04:43.057452   31414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-817269 NodeName:ha-817269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:04:43.057592   31414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-817269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
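For reference: the kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written below to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming kubeadm sits alongside kubelet under /var/lib/minikube/binaries/v1.31.1 (the binaries check below only confirms the directory), that would list the images this config implies:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config images list \
      --config /var/tmp/minikube/kubeadm.yaml.new   # registry.k8s.io images for v1.31.1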
	
	I0914 00:04:43.057623   31414 kube-vip.go:115] generating kube-vip config ...
	I0914 00:04:43.057664   31414 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 00:04:43.070251   31414 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 00:04:43.070366   31414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
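For reference: the static-pod manifest above is copied below to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet on this control-plane node runs kube-vip and advertises the HA VIP 192.168.39.254 on port 8443, the address that the /etc/hosts check below maps to control-plane.minikube.internal. A minimal sketch, assuming the VIP is already up, to check it from the guest:

    curl -sk https://192.168.39.254:8443/healthz; echo   # apiserver health through the VIP
    sudo crictl ps --name kube-vip                       # the kube-vip container on this node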
	I0914 00:04:43.070424   31414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:04:43.081580   31414 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:04:43.081649   31414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 00:04:43.091860   31414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0914 00:04:43.109118   31414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:04:43.126551   31414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0914 00:04:43.143531   31414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 00:04:43.159852   31414 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 00:04:43.165053   31414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:04:43.310897   31414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:04:43.326150   31414 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269 for IP: 192.168.39.132
	I0914 00:04:43.326182   31414 certs.go:194] generating shared ca certs ...
	I0914 00:04:43.326203   31414 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:04:43.326394   31414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:04:43.326444   31414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:04:43.326454   31414 certs.go:256] generating profile certs ...
	I0914 00:04:43.326531   31414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/client.key
	I0914 00:04:43.326566   31414 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427
	I0914 00:04:43.326583   31414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.68 192.168.39.254]
	I0914 00:04:43.445973   31414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427 ...
	I0914 00:04:43.446007   31414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427: {Name:mk8b569386742ac48cb0304d4e3f1a765a9a2ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:04:43.446169   31414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427 ...
	I0914 00:04:43.446180   31414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427: {Name:mk073cdcbfc344b59cbade2545dc3d5aba23ec42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:04:43.446249   31414 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt.e8e29427 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt
	I0914 00:04:43.446396   31414 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key.e8e29427 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key
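For reference: the apiserver certificate regenerated just above carries SANs for the service IP, localhost, every control-plane node, and the VIP (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.6 192.168.39.68 192.168.39.254). A minimal sketch to inspect them once the new cert has been pushed to /var/lib/minikube/certs/apiserver.crt (the scp below):

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'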
	I0914 00:04:43.446525   31414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key
	I0914 00:04:43.446541   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 00:04:43.446553   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 00:04:43.446563   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 00:04:43.446574   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 00:04:43.446584   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 00:04:43.446595   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 00:04:43.446607   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 00:04:43.446617   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 00:04:43.446665   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:04:43.446694   31414 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:04:43.446703   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:04:43.446726   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:04:43.446766   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:04:43.446792   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:04:43.446830   31414 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:04:43.446858   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.446872   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.446884   31414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.447434   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:04:43.475330   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:04:43.501806   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:04:43.526820   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:04:43.550661   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 00:04:43.575143   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:04:43.599733   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:04:43.624993   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/ha-817269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:04:43.649566   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:04:43.674794   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:04:43.701260   31414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:04:43.728385   31414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:04:43.754527   31414 ssh_runner.go:195] Run: openssl version
	I0914 00:04:43.763138   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:04:43.784506   31414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.789200   31414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.789258   31414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:04:43.794847   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:04:43.804441   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:04:43.815990   31414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.820707   31414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.820779   31414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:04:43.826497   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:04:43.836199   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:04:43.847502   31414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.852872   31414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.852962   31414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:04:43.859243   31414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
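For reference: the /etc/ssl/certs/<hash>.0 link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is what the openssl x509 -hash calls in the log compute. A minimal sketch reproducing the minikubeCA link with the same paths:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run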
	I0914 00:04:43.869668   31414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:04:43.875071   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:04:43.881321   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:04:43.887360   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:04:43.893013   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:04:43.898999   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:04:43.904830   31414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
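For reference: each openssl x509 -checkend 86400 call above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, presumably so the restart path can renew anything about to expire. A minimal sketch of the same check with an explicit result:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "still valid in 24h" || echo "expires within 24h"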
	I0914 00:04:43.910593   31414 kubeadm.go:392] StartCluster: {Name:ha-817269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-817269 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.248 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:04:43.910727   31414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:04:43.910807   31414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:04:43.952873   31414 cri.go:89] found id: "3cd2250ea2d88345c8496fcdb842fc6a061cc676bfc39889a7db66e56a7988f5"
	I0914 00:04:43.952898   31414 cri.go:89] found id: "dd0bff85390e25c3ea3d3294406935d67d03bee37a44a5812fbe70914bf0adcb"
	I0914 00:04:43.952907   31414 cri.go:89] found id: "2a723ee0b6b3e403960334ef20660530ed192a996a1c504ada3caf9b4b0b0258"
	I0914 00:04:43.952911   31414 cri.go:89] found id: "61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997"
	I0914 00:04:43.952914   31414 cri.go:89] found id: "4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94"
	I0914 00:04:43.952920   31414 cri.go:89] found id: "315adcde5c56f6470f3d52f4a241e02d4719ccd9a3896fe1c10e155ad9ac5ead"
	I0914 00:04:43.952923   31414 cri.go:89] found id: "b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e"
	I0914 00:04:43.952925   31414 cri.go:89] found id: "f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d"
	I0914 00:04:43.952927   31414 cri.go:89] found id: "2faad36b3b9a32253999fcf37ebad1c0105605972c685346e79dec3b10248bf5"
	I0914 00:04:43.952934   31414 cri.go:89] found id: "45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5"
	I0914 00:04:43.952938   31414 cri.go:89] found id: "33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a"
	I0914 00:04:43.952941   31414 cri.go:89] found id: "a72c7ed6fd0b97ad6e5b0a7952ef6ab3db2f9454997bd7e10c28e41e725e3d96"
	I0914 00:04:43.952943   31414 cri.go:89] found id: "11c2a11c941f9d736b3e503079cbbad5b0de0ff69bfaaa4c1ddee40caadd4e08"
	I0914 00:04:43.952946   31414 cri.go:89] found id: ""
	I0914 00:04:43.952986   31414 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.039192076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f87d4778-12f9-40a0-a9bc-8958e19c1716 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.040808392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=856e989f-762e-47d0-9a1e-780ddadf2e46 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.041419263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272596041393079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=856e989f-762e-47d0-9a1e-780ddadf2e46 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.042016096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07b30f09-0c8f-490d-a518-c717755f172c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.042082996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07b30f09-0c8f-490d-a518-c717755f172c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.042606338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07b30f09-0c8f-490d-a518-c717755f172c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.088599318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=085ddea7-f2cb-4eeb-bc64-3f50410b3ad9 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.088675842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=085ddea7-f2cb-4eeb-bc64-3f50410b3ad9 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.090748419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03cdfba1-e8b8-4be2-8477-62b0edcbe43f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.091361288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272596091325649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03cdfba1-e8b8-4be2-8477-62b0edcbe43f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.092037772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=786bb554-3a1e-4728-91a7-e22513d12249 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.092211059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=786bb554-3a1e-4728-91a7-e22513d12249 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.093722245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=786bb554-3a1e-4728-91a7-e22513d12249 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.145939530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd5a24ca-3c30-442d-a0bb-2fe713b40135 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.146045448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd5a24ca-3c30-442d-a0bb-2fe713b40135 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.147681744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edb091ff-d7ba-451d-b8d4-0c8bc0af6921 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.148414402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272596148386937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edb091ff-d7ba-451d-b8d4-0c8bc0af6921 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.149556691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc980c7e-cf7d-404a-9074-b01025104206 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.149651830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc980c7e-cf7d-404a-9074-b01025104206 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.150377265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc980c7e-cf7d-404a-9074-b01025104206 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.180692647Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0c0284a5-9204-4ab2-b06b-ae11dc1e4d40 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.181469775Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-5cbmn,Uid:e288c7d7-36f3-4fd1-a944-403098141304,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272323582530155,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:56:30.842079246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-817269,Uid:b20cb16ce36ccdd5526620605e9605ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726272304191464951,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{kubernetes.io/config.hash: b20cb16ce36ccdd5526620605e9605ea,kubernetes.io/config.seen: 2024-09-14T00:04:43.119541501Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mwpbw,Uid:e19eb0be-8e26-4e88-824e-aaec9360bf6c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289875905096,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-13T23:54:08.899566051Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-817269,Uid:eda3685dd3d4be5c5da91818ed6f5c19,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289867847515,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eda3685dd3d4be5c5da91818ed6f5c19,kubernetes.io/config.seen: 2024-09-13T23:53:52.338163010Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&PodSandboxMetadata{Name:kube-proxy-p9lkl,Uid:cf9b3ec9-8ac8-468c-887e-3b572646d4db,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726272289852743083,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:53:56.574467871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-817269,Uid:cbd5cb5db01522f88f4d8c5e21684ad5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289841530933,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,tier: control-plane,},Annotations:map[string]string{ku
beadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.132:8443,kubernetes.io/config.hash: cbd5cb5db01522f88f4d8c5e21684ad5,kubernetes.io/config.seen: 2024-09-13T23:53:52.338159056Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&PodSandboxMetadata{Name:etcd-ha-817269,Uid:ed7dba6ff1cb1dff87cc0fe9bba89894,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289838384015,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.132:2379,kubernetes.io/config.hash: ed7dba6ff1cb1dff87cc0fe9bba89894,kubernetes.io/config.seen: 2024-09-13T23:53:52.338166821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSa
ndbox{Id:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rq5pv,Uid:34cd12c1-d279-4067-a290-be3af39ddf20,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289808582146,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:54:08.888588375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc88d524-adef-4f7a-ae34-c02a9d94b99d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289801880327,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,i
o.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-13T23:54:08.903498000Z,kubernetes.io/config.source: api,},RuntimeHa
ndler:,},&PodSandbox{Id:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&PodSandboxMetadata{Name:kindnet-dxj2g,Uid:5dd2f191-9de6-498e-9d86-7a355340f4a6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289790959573,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:53:56.582644109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-817269,Uid:0c577b2f163a5153f09183c3f12f62cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726272289779926211,Labels:map[string]string{component: kube-scheduler,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c577b2f163a5153f09183c3f12f62cf,kubernetes.io/config.seen: 2024-09-13T23:53:52.338164993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-5cbmn,Uid:e288c7d7-36f3-4fd1-a944-403098141304,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271791165946425,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:56:30.842079246Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mwpbw,Uid:e19eb0be-8e26-4e88-824e-aaec9360bf6c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271649243615136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:54:08.899566051Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rq5pv,Uid:34cd12c1-d279-4067-a290-be3af39ddf20,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271649196873739,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:54:08.888588375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&PodSandboxMetadata{Name:kube-proxy-p9lkl,Uid:cf9b3ec9-8ac8-468c-887e-3b572646d4db,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271636899258050,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:53:56.574467871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&PodSandboxMetadata{Name:kindnet-dxj2g,Uid:5dd2f191-9de6-498e-9d86-7a355340f4a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271636893420490,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-13T23:53:56.582644109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-817269,Uid:0c577b2f163a5153f09183c3f12f62cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271625759554870,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c577b2f163a5153f09183c3f12f62cf,kubernetes.io/config.seen: 2024-09-13T23:53:45.265734288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&PodSandboxMetadata{Name:etcd-ha-817269,Uid:ed7dba6ff1cb1dff87cc0fe9bba89894,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726271625755082209,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.132:2379,kubernetes.io/config.hash: ed7dba6f
f1cb1dff87cc0fe9bba89894,kubernetes.io/config.seen: 2024-09-13T23:53:45.265727629Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0c0284a5-9204-4ab2-b06b-ae11dc1e4d40 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.182670574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b2350c3-b0f5-479e-9d26-8e6f282d8ed2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.182752472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b2350c3-b0f5-479e-9d26-8e6f282d8ed2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:09:56 ha-817269 crio[3557]: time="2024-09-14 00:09:56.183575308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fd509e7c4da3c7f0e37570950f695a0b4cfeebff60de0f50d0b390cf4ca507c,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726272367379981844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726272336383033669,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726272335379072924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f19809f85754306b4f208e8766bb0c7d48c1033bf77b73c9c5dcc88d91fc45,PodSandboxId:19acc9fdd3bd4cc5dc08a502addf7f9a7a2b5c490b9e43fdfd34831c52137282,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726272324375914577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88d524-adef-4f7a-ae34-c02a9d94b99d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52647c00652ebc8a39929cb3e19bb35107dd8b5d13ff31bbcf7c3a0cb494933,PodSandboxId:14e0ffe6082979663bde15da1862b644d2d4a91213af170716b02b3bdab3a54f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726272323798822406,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed6377406153cd8bc77cb19d595048b4351581f062e7690bd45128b09bd2546f,PodSandboxId:f3da812aa6182d19a44d0022341e0b8df202be36ab5a9659e83515f39d5ff1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726272304278194672,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20cb16ce36ccdd5526620605e9605ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8,PodSandboxId:e62856cb4921c7d95a8650384a21cd28d880cbde318a0565697cdd1a5f7f0884,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726272291724657244,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32,PodSandboxId:8075147ed19b18ed2e30b5750666f80c91b80b818f1b2111fd7cd09f0c7ee7b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726272290476401752,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99eea984
6e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a,PodSandboxId:ad37d6b4ac39b2f8d05a18e213e0a109ce024a6f662a1835b97e61b890318c41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290523520671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29,PodSandboxId:02d05f666d12d7075839c533eeaf590dcb4058dceaa0f50d1059358c50a88cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726272290393059158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb,PodSandboxId:2aaae96708eb22941ad9e501525f4b0071404a4b9112be162030a9a3888f29a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726272290290736953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d,PodSandboxId:363fa8cab215a3b6602b91b7ee07e42de7e3741b657daf62ded7726cccaafdbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726272290240760230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbd5cb5db01522f88f4d8c
5e21684ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588,PodSandboxId:41d6b53cb5d9c941f72957e96b613a39c1f8527d47ccdd2085c093e8fc667267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726272290208446048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda3685dd3d4be5c5
da91818ed6f5c19,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e,PodSandboxId:b8362da423aa6f56a951b5bf0bc6d7a1bd6c04104d32c1cebfeede6bc1f484b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726272290145394526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3d244ad4c3085ea74ab200a558fb4a39befb3d1079eb3e52688e3886d3948b,PodSandboxId:2ff13b6745379a5cb9b75662d65e67de99ed83d64796c5a476aafe0e6af522d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726271794970239434,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-5cbmn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e288c7d7-36f3-4fd1-a944-403098141304,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94,PodSandboxId:8b315def4f6288adddcd4a402c4414c1be561a9f94fd9c2649284921eb285847,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649511901785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwpbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19eb0be-8e26-4e88-824e-aaec9360bf6c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997,PodSandboxId:36c20ca07db880f1f7a4241f24c38ba3adb55a0fe7b1af5429bee1cfe7dfa17f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726271649512728801,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd12c1-d279-4067-a290-be3af39ddf20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e,PodSandboxId:f453fe4fb77a32122264b87372b4caf98c24d01041b8ca43e6a2149777263743,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726271637501672629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxj2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd2f191-9de6-498e-9d86-7a355340f4a6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d,PodSandboxId:babdf5981ec862302b87e6080a3aa7a65ae53b5af07b9760d90f1372d5488c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726271637232003889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf9b3ec9-8ac8-468c-887e-3b572646d4db,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5,PodSandboxId:eccbef0ef4d20a8c8698036ff8b9fff3036065ce2820a834ecd7a69d98df90f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726271626010029425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c577b2f163a5153f09183c3f12f62cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a,PodSandboxId:0ea1c016c25f7772dda59dac353a7016d4929adc03ac339ced0d20a23d8c60ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726271625986587779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-817269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7dba6ff1cb1dff87cc0fe9bba89894,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b2350c3-b0f5-479e-9d26-8e6f282d8ed2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0fd509e7c4da3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   19acc9fdd3bd4       storage-provisioner
	95b4d7f4a781a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   41d6b53cb5d9c       kube-controller-manager-ha-817269
	c1923ec759795       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   363fa8cab215a       kube-apiserver-ha-817269
	a9f19809f8575       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   19acc9fdd3bd4       storage-provisioner
	d52647c00652e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   14e0ffe608297       busybox-7dff88458-5cbmn
	ed6377406153c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   f3da812aa6182       kube-vip-ha-817269
	3b8be9d7ef173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   e62856cb4921c       kube-proxy-p9lkl
	99eea9846e2a3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   ad37d6b4ac39b       coredns-7c65d6cfc9-mwpbw
	febbe47268729       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   8075147ed19b1       kindnet-dxj2g
	7a85a86036d4e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   02d05f666d12d       etcd-ha-817269
	acc0f4c63f717       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   2aaae96708eb2       coredns-7c65d6cfc9-rq5pv
	1eb000680b819       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   363fa8cab215a       kube-apiserver-ha-817269
	fcffbcbfeb991       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   41d6b53cb5d9c       kube-controller-manager-ha-817269
	c73accba880ce       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   b8362da423aa6       kube-scheduler-ha-817269
	4c3d244ad4c30       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   2ff13b6745379       busybox-7dff88458-5cbmn
	61abb6eb65e46       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   36c20ca07db88       coredns-7c65d6cfc9-rq5pv
	4ce76346be5b3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   8b315def4f628       coredns-7c65d6cfc9-mwpbw
	b992c3b895609       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   f453fe4fb77a3       kindnet-dxj2g
	f8f2322f127fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   babdf5981ec86       kube-proxy-p9lkl
	45371c7b7dce4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   eccbef0ef4d20       kube-scheduler-ha-817269
	33ac2ce16b58b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   0ea1c016c25f7       etcd-ha-817269
	
	
	==> coredns [4ce76346be5b3afc07e4c9e50281decb9a036483a99065e89edddf9355dbcd94] <==
	[INFO] 10.244.2.2:53222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165792s
	[INFO] 10.244.2.2:51300 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207369s
	[INFO] 10.244.2.2:56912 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110533s
	[INFO] 10.244.2.2:37804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204459s
	[INFO] 10.244.1.2:54436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226539s
	[INFO] 10.244.1.2:56082 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001819826s
	[INFO] 10.244.1.2:58316 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222276s
	[INFO] 10.244.1.2:42306 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083319s
	[INFO] 10.244.0.4:53876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020116s
	[INFO] 10.244.0.4:56768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013293s
	[INFO] 10.244.0.4:47653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.0.4:50365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154019s
	[INFO] 10.244.2.2:56862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195398s
	[INFO] 10.244.2.2:40784 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189124s
	[INFO] 10.244.2.2:42797 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106937s
	[INFO] 10.244.1.2:49876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246067s
	[INFO] 10.244.0.4:44026 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000299901s
	[INFO] 10.244.0.4:40123 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000233032s
	[INFO] 10.244.1.2:42204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000500811s
	[INFO] 10.244.1.2:44587 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205062s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1861&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1859&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1864&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [61abb6eb65e46d1e5460a22b0d9f81b21ab699506da8b149c6b2c5c812ff7997] <==
	[INFO] 10.244.0.4:39998 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005418524s
	[INFO] 10.244.0.4:57052 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284617s
	[INFO] 10.244.0.4:59585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149604s
	[INFO] 10.244.2.2:44013 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0019193s
	[INFO] 10.244.2.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022048s
	[INFO] 10.244.2.2:33172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001513908s
	[INFO] 10.244.1.2:35965 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790224s
	[INFO] 10.244.1.2:42555 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000321828s
	[INFO] 10.244.1.2:54761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123494s
	[INFO] 10.244.1.2:51742 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176208s
	[INFO] 10.244.2.2:55439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172115s
	[INFO] 10.244.1.2:32823 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000209293s
	[INFO] 10.244.1.2:54911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191869s
	[INFO] 10.244.1.2:45538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090559s
	[INFO] 10.244.0.4:51099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293009s
	[INFO] 10.244.0.4:52402 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204563s
	[INFO] 10.244.2.2:48710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000318957s
	[INFO] 10.244.2.2:51855 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124089s
	[INFO] 10.244.2.2:54763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000257295s
	[INFO] 10.244.2.2:56836 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186617s
	[INFO] 10.244.1.2:45824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223466s
	[INFO] 10.244.1.2:32974 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143816s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1864&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [99eea9846e2a34a2bc8968561415c8c345b5a567276a49a63a27e54d5095118a] <==
	Trace[926961384]: [10.001431094s] [10.001431094s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59914->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [acc0f4c63f7173184c261ac162486aa0b02379533434ba1079a343104c8b30eb] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50248->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50248->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49256->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49256->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-817269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_53_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:53:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:09:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:05:42 +0000   Fri, 13 Sep 2024 23:54:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-817269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bc026746bcc47d49a7f508137c16c0a
	  System UUID:                0bc02674-6bcc-47d4-9a7f-508137c16c0a
	  Boot ID:                    1a383d96-7a2a-4a67-94ca-0f262bc14568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5cbmn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-mwpbw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-rq5pv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-817269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-dxj2g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-817269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-817269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-p9lkl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-817269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-817269                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m21s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-817269 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-817269 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-817269 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-817269 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Warning  ContainerGCFailed        6m4s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m28s (x3 over 6m17s)  kubelet          Node ha-817269 status is now: NodeNotReady
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-817269 event: Registered Node ha-817269 in Controller
	
	
	Name:               ha-817269-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_54_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:54:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:09:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:06:16 +0000   Sat, 14 Sep 2024 00:05:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-817269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 260fc9ca7fe3421fbf6de250d4218230
	  System UUID:                260fc9ca-7fe3-421f-bf6d-e250d4218230
	  Boot ID:                    eee86d22-fdc7-4135-a072-8893326e7e42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wff9f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-817269-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-qcfqk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-817269-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-817269-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7t9b2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-817269-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-817269-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-817269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-817269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-817269-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-817269-m02 status is now: NodeNotReady
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node ha-817269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node ha-817269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-817269-m02 event: Registered Node ha-817269-m02 in Controller
	
	
	Name:               ha-817269-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-817269-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-817269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_13T23_57_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:57:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-817269-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:07:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:08:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:08:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:08:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 00:07:09 +0000   Sat, 14 Sep 2024 00:08:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-817269-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5153efd89a4042b8870c772e8476a0
	  System UUID:                ca5153ef-d89a-4042-b887-0c772e8476a0
	  Boot ID:                    3c4fbdb7-4778-4101-b683-94856a940de0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8rhht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-45h44              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-b8pch           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-817269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-817269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-817269-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-817269-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   NodeNotReady             3m42s                  node-controller  Node ha-817269-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-817269-m04 event: Registered Node ha-817269-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-817269-m04 has been rebooted, boot id: 3c4fbdb7-4778-4101-b683-94856a940de0
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-817269-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-817269-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-817269-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-817269-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.741095] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.067049] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057264] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.183859] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.112834] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.262666] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.811628] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.142511] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.066169] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.376137] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.080016] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.054020] kauditd_printk_skb: 26 callbacks suppressed
	[Sep13 23:54] kauditd_printk_skb: 35 callbacks suppressed
	[ +43.648430] kauditd_printk_skb: 24 callbacks suppressed
	[Sep14 00:04] systemd-fstab-generator[3481]: Ignoring "noauto" option for root device
	[  +0.142916] systemd-fstab-generator[3493]: Ignoring "noauto" option for root device
	[  +0.175540] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.151389] systemd-fstab-generator[3519]: Ignoring "noauto" option for root device
	[  +0.284940] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +1.260334] systemd-fstab-generator[3643]: Ignoring "noauto" option for root device
	[  +6.617405] kauditd_printk_skb: 122 callbacks suppressed
	[Sep14 00:05] kauditd_printk_skb: 85 callbacks suppressed
	[  +9.052009] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.843404] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.725362] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [33ac2ce16b58ba4f7697cb6d6477c8584cc225b00a47c7fa3fb88f1c6fc7cd5a] <==
	{"level":"warn","ts":"2024-09-14T00:03:09.690327Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T00:03:01.840606Z","time spent":"7.825688811s","remote":"127.0.0.1:42408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	2024/09/14 00:03:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-14T00:03:09.754281Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.132:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:03:09.754342Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.132:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T00:03:09.754460Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6ae81251a1433dae","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-14T00:03:09.754772Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.754943Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755143Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755287Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755358Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755541Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755636Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"80ad799dd55374fd"}
	{"level":"info","ts":"2024-09-14T00:03:09.755714Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755832Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755937Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.755983Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.756015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.756043Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:03:09.759756Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.132:2380"}
	{"level":"warn","ts":"2024-09-14T00:03:09.759860Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.927911962s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-14T00:03:09.759925Z","caller":"traceutil/trace.go:171","msg":"trace[286015042] range","detail":"{range_begin:; range_end:; }","duration":"8.927983849s","start":"2024-09-14T00:03:00.831925Z","end":"2024-09-14T00:03:09.759909Z","steps":["trace[286015042] 'agreement among raft nodes before linearized reading'  (duration: 8.927911186s)"],"step_count":1}
	{"level":"error","ts":"2024-09-14T00:03:09.759977Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-14T00:03:09.759974Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.132:2380"}
	{"level":"info","ts":"2024-09-14T00:03:09.760064Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-817269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.132:2380"],"advertise-client-urls":["https://192.168.39.132:2379"]}
	
	
	==> etcd [7a85a86036d4e7490ba8b7ea2747642d6a60e936449c65ea0d7840d2be333f29] <==
	{"level":"info","ts":"2024-09-14T00:06:28.770589Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.770710Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.785676Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.786411Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6ae81251a1433dae","to":"9bbbc15eae36a36d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-14T00:06:28.786457Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:06:28.788841Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6ae81251a1433dae","to":"9bbbc15eae36a36d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-14T00:06:28.789062Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.805048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ae81251a1433dae switched to configuration voters=(7703427304424422830 9272200926621562109)"}
	{"level":"info","ts":"2024-09-14T00:07:22.807658Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"62c3bb1b3a9c43e2","local-member-id":"6ae81251a1433dae","removed-remote-peer-id":"9bbbc15eae36a36d","removed-remote-peer-urls":["https://192.168.39.68:2380"]}
	{"level":"info","ts":"2024-09-14T00:07:22.807846Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"warn","ts":"2024-09-14T00:07:22.808271Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.808458Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"warn","ts":"2024-09-14T00:07:22.808833Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.809058Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.809425Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"warn","ts":"2024-09-14T00:07:22.809847Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","error":"context canceled"}
	{"level":"warn","ts":"2024-09-14T00:07:22.810436Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9bbbc15eae36a36d","error":"failed to read 9bbbc15eae36a36d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-14T00:07:22.810574Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"warn","ts":"2024-09-14T00:07:22.810840Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d","error":"context canceled"}
	{"level":"info","ts":"2024-09-14T00:07:22.810961Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6ae81251a1433dae","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.811052Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.811181Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6ae81251a1433dae","removed-remote-peer-id":"9bbbc15eae36a36d"}
	{"level":"info","ts":"2024-09-14T00:07:22.811297Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"6ae81251a1433dae","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"9bbbc15eae36a36d"}
	{"level":"warn","ts":"2024-09-14T00:07:22.820777Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6ae81251a1433dae","remote-peer-id-stream-handler":"6ae81251a1433dae","remote-peer-id-from":"9bbbc15eae36a36d"}
	{"level":"warn","ts":"2024-09-14T00:07:22.824880Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6ae81251a1433dae","remote-peer-id-stream-handler":"6ae81251a1433dae","remote-peer-id-from":"9bbbc15eae36a36d"}
	
	
	==> kernel <==
	 00:09:56 up 16 min,  0 users,  load average: 0.22, 0.44, 0.31
	Linux ha-817269 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b992c3b89560947ac63436b6f2a7eca4a068ed6daa35aec2be23e2c7bb1c9c9e] <==
	I0914 00:02:48.534383       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:02:48.534523       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:02:48.534758       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:02:48.534824       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:02:48.534948       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:02:48.534990       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:02:48.535144       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:02:48.535189       1 main.go:299] handling current node
	I0914 00:02:58.534896       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:02:58.535076       1 main.go:299] handling current node
	I0914 00:02:58.535196       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:02:58.535221       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:02:58.535448       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:02:58.535477       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:02:58.535567       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:02:58.535588       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	E0914 00:03:00.216891       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1841&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0914 00:03:08.535462       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:03:08.535561       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:03:08.535792       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0914 00:03:08.535820       1 main.go:322] Node ha-817269-m03 has CIDR [10.244.2.0/24] 
	I0914 00:03:08.535880       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:03:08.535899       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:03:08.536064       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:03:08.536165       1 main.go:299] handling current node
	
	
	==> kindnet [febbe472687292f5cc7f2d04803064db88b85c60d7ca4159a770e3cdabd52f32] <==
	I0914 00:09:11.586600       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:09:21.581323       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:09:21.581444       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:09:21.581587       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:09:21.581610       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:09:21.581742       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:09:21.581771       1 main.go:299] handling current node
	I0914 00:09:31.586023       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:09:31.586138       1 main.go:299] handling current node
	I0914 00:09:31.586158       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:09:31.586168       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:09:31.586323       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:09:31.586346       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:09:41.585191       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:09:41.585329       1 main.go:299] handling current node
	I0914 00:09:41.585369       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:09:41.585390       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:09:41.585529       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:09:41.585553       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	I0914 00:09:51.576907       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0914 00:09:51.577020       1 main.go:299] handling current node
	I0914 00:09:51.577057       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0914 00:09:51.577076       1 main.go:322] Node ha-817269-m02 has CIDR [10.244.1.0/24] 
	I0914 00:09:51.577278       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0914 00:09:51.577304       1 main.go:322] Node ha-817269-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1eb000680b819a19fc8b4bf26d345afd7f6c545edead4364827c6d3984d0552d] <==
	I0914 00:04:50.735817       1 options.go:228] external host was not specified, using 192.168.39.132
	I0914 00:04:50.750351       1 server.go:142] Version: v1.31.1
	I0914 00:04:50.750421       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:04:51.642309       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0914 00:04:51.652164       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:04:51.658522       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0914 00:04:51.658599       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0914 00:04:51.658859       1 instance.go:232] Using reconciler: lease
	W0914 00:05:11.639461       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0914 00:05:11.639604       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0914 00:05:11.661619       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c1923ec7597957ceaf72efc38a5cb3e6c46425dd1a83ad291e114f348336c260] <==
	I0914 00:05:37.538507       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0914 00:05:37.538599       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0914 00:05:37.629602       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 00:05:37.637732       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:05:37.637766       1 policy_source.go:224] refreshing policies
	I0914 00:05:37.644537       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0914 00:05:37.648914       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.6 192.168.39.68]
	I0914 00:05:37.649057       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 00:05:37.649229       1 aggregator.go:171] initial CRD sync complete...
	I0914 00:05:37.649277       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 00:05:37.649287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 00:05:37.649295       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:05:37.650861       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 00:05:37.651319       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 00:05:37.651644       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 00:05:37.652948       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 00:05:37.653293       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 00:05:37.653324       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 00:05:37.653444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:05:37.659605       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0914 00:05:37.663068       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0914 00:05:37.665970       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 00:05:37.726769       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:05:38.557294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 00:05:38.981830       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132 192.168.39.6 192.168.39.68]
	
	
	==> kube-controller-manager [95b4d7f4a781a83a02d7bec79e846ad4708ded3fa409efe9040288829e5d1994] <==
	I0914 00:08:11.007875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:08:11.026081       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:08:11.094425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.832676ms"
	I0914 00:08:11.094738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.455µs"
	I0914 00:08:14.432805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	I0914 00:08:16.165641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-817269-m04"
	E0914 00:08:20.970239       1 gc_controller.go:151] "Failed to get node" err="node \"ha-817269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-817269-m03"
	E0914 00:08:20.970272       1 gc_controller.go:151] "Failed to get node" err="node \"ha-817269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-817269-m03"
	E0914 00:08:20.970279       1 gc_controller.go:151] "Failed to get node" err="node \"ha-817269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-817269-m03"
	E0914 00:08:20.970283       1 gc_controller.go:151] "Failed to get node" err="node \"ha-817269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-817269-m03"
	E0914 00:08:20.970288       1 gc_controller.go:151] "Failed to get node" err="node \"ha-817269-m03\" not found" logger="pod-garbage-collector-controller" node="ha-817269-m03"
	I0914 00:08:20.984850       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bwr6g"
	I0914 00:08:21.069431       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bwr6g"
	I0914 00:08:21.069514       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-817269-m03"
	I0914 00:08:21.106540       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-817269-m03"
	I0914 00:08:21.106645       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-817269-m03"
	I0914 00:08:21.138335       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-817269-m03"
	I0914 00:08:21.138370       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-817269-m03"
	I0914 00:08:21.170175       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-817269-m03"
	I0914 00:08:21.170279       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-817269-m03"
	I0914 00:08:21.201927       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-817269-m03"
	I0914 00:08:21.202293       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-817269-m03"
	I0914 00:08:21.232326       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-817269-m03"
	I0914 00:08:21.232361       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-np2s8"
	I0914 00:08:21.268693       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-np2s8"
	
	
	==> kube-controller-manager [fcffbcbfeb991db123a6319ebc3242e41bfe763af0202901b93e71e9328d4588] <==
	I0914 00:04:51.614359       1 serving.go:386] Generated self-signed cert in-memory
	I0914 00:04:52.182194       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 00:04:52.182231       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:04:52.186316       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 00:04:52.186814       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 00:04:52.187007       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 00:04:52.187285       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 00:05:12.669897       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.132:8443/healthz\": dial tcp 192.168.39.132:8443: connect: connection refused"
	
	
	==> kube-proxy [3b8be9d7ef173a0937543598feae62bb15799d83aa5a6be4f2a5ec5908b928d8] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:04:54.966811       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:04:58.038963       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:01.111924       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:07.256381       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:16.470637       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 00:05:34.902911       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-817269\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0914 00:05:34.903145       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0914 00:05:34.903285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:05:34.936601       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:05:34.936656       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:05:34.936687       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:05:34.939049       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:05:34.939561       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:05:34.939615       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:05:34.941337       1 config.go:199] "Starting service config controller"
	I0914 00:05:34.941412       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:05:34.941468       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:05:34.941485       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:05:34.942258       1 config.go:328] "Starting node config controller"
	I0914 00:05:34.942302       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:05:35.941567       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:05:35.941837       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:05:35.942604       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f8f2322f127fb77ebbd25a0f7d28a5e60798401a3639e4302d9b5ee67a478c1d] <==
	E0914 00:01:51.798726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:51.798688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:51.798800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:51.798882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:51.798951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:59.286566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757": dial tcp 192.168.39.254:8443: connect: no route to host
	W0914 00:01:59.286709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:59.287429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:01:59.286645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:01:59.287564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0914 00:01:59.286712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:09.080353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	W0914 00:02:09.079945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:09.081225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0914 00:02:09.081174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:09.081523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:09.081616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:24.438902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:24.439191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:33.655450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:33.655620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1757\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:02:36.728544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:02:36.728734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 00:03:07.447601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 00:03:07.447703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-817269&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [45371c7b7dce4d7c1fdfb3312012bb25b6ee9b3eafa816784f98c080345088e5] <==
	E0913 23:56:30.847268       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wff9f\": pod busybox-7dff88458-wff9f is already assigned to node \"ha-817269-m02\"" pod="default/busybox-7dff88458-wff9f"
	E0913 23:56:30.906194       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:56:30.906282       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e288c7d7-36f3-4fd1-a944-403098141304(default/busybox-7dff88458-5cbmn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5cbmn"
	E0913 23:56:30.906305       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5cbmn\": pod busybox-7dff88458-5cbmn is already assigned to node \"ha-817269\"" pod="default/busybox-7dff88458-5cbmn"
	I0913 23:56:30.906349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5cbmn" node="ha-817269"
	E0913 23:57:08.751565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	E0913 23:57:08.751687       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 234c68a0-c2e4-4784-8bda-6c0a1ffc84db(kube-system/kube-proxy-tdcn8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tdcn8"
	E0913 23:57:08.751719       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tdcn8\": pod kube-proxy-tdcn8 is already assigned to node \"ha-817269-m04\"" pod="kube-system/kube-proxy-tdcn8"
	I0913 23:57:08.751751       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tdcn8" node="ha-817269-m04"
	E0914 00:02:54.572065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0914 00:02:56.475040       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0914 00:02:56.618334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0914 00:02:58.116858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0914 00:02:59.284667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0914 00:02:59.656135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0914 00:02:59.874994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0914 00:03:00.576217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0914 00:03:00.695879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0914 00:03:00.864188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0914 00:03:01.453705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0914 00:03:01.821481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0914 00:03:03.366617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0914 00:03:05.265365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0914 00:03:05.362409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0914 00:03:09.649615       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c73accba880ce1bfa740ea479dde8a034640887e94dad230bc80764b4532534e] <==
	W0914 00:05:31.121908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.132:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.122030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.132:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.506592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.132:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.506704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.132:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.871223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.132:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.871281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.132:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:31.881048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.132:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:31.881181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.132:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:32.101270       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.132:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:32.101323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.132:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:32.632634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.132:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:32.632720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.132:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:32.675686       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.132:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:32.676138       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.132:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:33.126707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.132:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:33.126786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.132:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:34.692803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.132:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:34.692929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.132:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	W0914 00:05:35.401005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.132:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.132:8443: connect: connection refused
	E0914 00:05:35.401073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.132:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.132:8443: connect: connection refused" logger="UnhandledError"
	I0914 00:05:55.381748       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 00:07:19.526531       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-d9cxm\": pod busybox-7dff88458-d9cxm is already assigned to node \"ha-817269-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-d9cxm" node="ha-817269-m04"
	E0914 00:07:19.526644       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3e476ef0-1cb1-4604-a4d8-e08e3f398830(default/busybox-7dff88458-d9cxm) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-d9cxm"
	E0914 00:07:19.526673       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-d9cxm\": pod busybox-7dff88458-d9cxm is already assigned to node \"ha-817269-m04\"" pod="default/busybox-7dff88458-d9cxm"
	I0914 00:07:19.526696       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-d9cxm" node="ha-817269-m04"
	
	
	==> kubelet <==
	Sep 14 00:08:42 ha-817269 kubelet[1306]: E0914 00:08:42.628658    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272522626712714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:08:52 ha-817269 kubelet[1306]: E0914 00:08:52.380809    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:08:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:08:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:08:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:08:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:08:52 ha-817269 kubelet[1306]: E0914 00:08:52.631355    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272532631017066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:08:52 ha-817269 kubelet[1306]: E0914 00:08:52.631409    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272532631017066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:02 ha-817269 kubelet[1306]: E0914 00:09:02.633767    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272542632299619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:02 ha-817269 kubelet[1306]: E0914 00:09:02.633847    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272542632299619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:12 ha-817269 kubelet[1306]: E0914 00:09:12.635304    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272552635032602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:12 ha-817269 kubelet[1306]: E0914 00:09:12.635345    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272552635032602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:22 ha-817269 kubelet[1306]: E0914 00:09:22.636610    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272562636162173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:22 ha-817269 kubelet[1306]: E0914 00:09:22.636637    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272562636162173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:32 ha-817269 kubelet[1306]: E0914 00:09:32.643800    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272572643161830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:32 ha-817269 kubelet[1306]: E0914 00:09:32.644866    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272572643161830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:42 ha-817269 kubelet[1306]: E0914 00:09:42.646382    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272582645648619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:42 ha-817269 kubelet[1306]: E0914 00:09:42.647245    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272582645648619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:52 ha-817269 kubelet[1306]: E0914 00:09:52.379759    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:09:52 ha-817269 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:09:52 ha-817269 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:09:52 ha-817269 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:09:52 ha-817269 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:09:52 ha-817269 kubelet[1306]: E0914 00:09:52.649894    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272592649448343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:09:52 ha-817269 kubelet[1306]: E0914 00:09:52.649932    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726272592649448343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:09:55.721030   33818 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19640-5422/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-817269 -n ha-817269
helpers_test.go:261: (dbg) Run:  kubectl --context ha-817269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.73s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (323.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-209237
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-209237
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-209237: exit status 82 (2m1.812443373s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-209237-m03"  ...
	* Stopping node "multinode-209237-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-209237" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209237 --wait=true -v=8 --alsologtostderr
E0914 00:27:20.625838   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:29:31.534814   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:30:23.690899   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209237 --wait=true -v=8 --alsologtostderr: (3m19.787433459s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-209237
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-209237 -n multinode-209237
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-209237 logs -n 25: (1.38772627s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3527454802/001/cp-test_multinode-209237-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237:/home/docker/cp-test_multinode-209237-m02_multinode-209237.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237 sudo cat                                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m02_multinode-209237.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03:/home/docker/cp-test_multinode-209237-m02_multinode-209237-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237-m03 sudo cat                                   | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m02_multinode-209237-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp testdata/cp-test.txt                                                | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3527454802/001/cp-test_multinode-209237-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237:/home/docker/cp-test_multinode-209237-m03_multinode-209237.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237 sudo cat                                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m03_multinode-209237.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02:/home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237-m02 sudo cat                                   | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-209237 node stop m03                                                          | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	| node    | multinode-209237 node start                                                             | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:25 UTC |                     |
	| stop    | -p multinode-209237                                                                     | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:25 UTC |                     |
	| start   | -p multinode-209237                                                                     | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:27 UTC | 14 Sep 24 00:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:27:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:27:11.604178   43671 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:27:11.604295   43671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:27:11.604310   43671 out.go:358] Setting ErrFile to fd 2...
	I0914 00:27:11.604317   43671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:27:11.604511   43671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:27:11.605031   43671 out.go:352] Setting JSON to false
	I0914 00:27:11.605930   43671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4178,"bootTime":1726269454,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:27:11.606018   43671 start.go:139] virtualization: kvm guest
	I0914 00:27:11.608022   43671 out.go:177] * [multinode-209237] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:27:11.609378   43671 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:27:11.609454   43671 notify.go:220] Checking for updates...
	I0914 00:27:11.611401   43671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:27:11.612455   43671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:27:11.613489   43671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:27:11.614402   43671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:27:11.615386   43671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:27:11.616702   43671 config.go:182] Loaded profile config "multinode-209237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:27:11.616834   43671 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:27:11.617274   43671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:27:11.617338   43671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:27:11.632610   43671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0914 00:27:11.633091   43671 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:27:11.633678   43671 main.go:141] libmachine: Using API Version  1
	I0914 00:27:11.633703   43671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:27:11.634064   43671 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:27:11.634252   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:27:11.672431   43671 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:27:11.673559   43671 start.go:297] selected driver: kvm2
	I0914 00:27:11.673576   43671 start.go:901] validating driver "kvm2" against &{Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:27:11.673705   43671 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:27:11.674003   43671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:27:11.674071   43671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:27:11.689167   43671 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:27:11.689871   43671 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:27:11.689904   43671 cni.go:84] Creating CNI manager for ""
	I0914 00:27:11.689960   43671 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 00:27:11.690049   43671 start.go:340] cluster config:
	{Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:27:11.690180   43671 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:27:11.692040   43671 out.go:177] * Starting "multinode-209237" primary control-plane node in "multinode-209237" cluster
	I0914 00:27:11.693130   43671 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:27:11.693171   43671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:27:11.693184   43671 cache.go:56] Caching tarball of preloaded images
	I0914 00:27:11.693296   43671 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:27:11.693309   43671 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:27:11.693438   43671 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/config.json ...
	I0914 00:27:11.693659   43671 start.go:360] acquireMachinesLock for multinode-209237: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:27:11.693704   43671 start.go:364] duration metric: took 26.378µs to acquireMachinesLock for "multinode-209237"
	I0914 00:27:11.693731   43671 start.go:96] Skipping create...Using existing machine configuration
	I0914 00:27:11.693740   43671 fix.go:54] fixHost starting: 
	I0914 00:27:11.693995   43671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:27:11.694026   43671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:27:11.709670   43671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0914 00:27:11.710205   43671 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:27:11.710692   43671 main.go:141] libmachine: Using API Version  1
	I0914 00:27:11.710716   43671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:27:11.711060   43671 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:27:11.711288   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:27:11.711429   43671 main.go:141] libmachine: (multinode-209237) Calling .GetState
	I0914 00:27:11.713009   43671 fix.go:112] recreateIfNeeded on multinode-209237: state=Running err=<nil>
	W0914 00:27:11.713037   43671 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 00:27:11.714723   43671 out.go:177] * Updating the running kvm2 "multinode-209237" VM ...
	I0914 00:27:11.715830   43671 machine.go:93] provisionDockerMachine start ...
	I0914 00:27:11.715849   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:27:11.716006   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:11.718829   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.719259   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:11.719283   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.719394   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:11.719541   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.719694   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.719818   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:11.719941   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:11.720152   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:11.720169   43671 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:27:11.824597   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-209237
	
	I0914 00:27:11.824621   43671 main.go:141] libmachine: (multinode-209237) Calling .GetMachineName
	I0914 00:27:11.824854   43671 buildroot.go:166] provisioning hostname "multinode-209237"
	I0914 00:27:11.824887   43671 main.go:141] libmachine: (multinode-209237) Calling .GetMachineName
	I0914 00:27:11.825057   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:11.827842   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.828251   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:11.828273   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.828435   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:11.828623   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.828902   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.829028   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:11.829153   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:11.829328   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:11.829340   43671 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-209237 && echo "multinode-209237" | sudo tee /etc/hostname
	I0914 00:27:11.943296   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-209237
	
	I0914 00:27:11.943338   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:11.945897   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.946220   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:11.946252   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.946427   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:11.946602   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.946764   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.946900   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:11.947050   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:11.947283   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:11.947301   43671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-209237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-209237/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-209237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:27:12.048416   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:27:12.048445   43671 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:27:12.048480   43671 buildroot.go:174] setting up certificates
	I0914 00:27:12.048489   43671 provision.go:84] configureAuth start
	I0914 00:27:12.048502   43671 main.go:141] libmachine: (multinode-209237) Calling .GetMachineName
	I0914 00:27:12.048785   43671 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:27:12.051597   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.052025   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.052069   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.052152   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:12.054562   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.054917   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.054943   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.055090   43671 provision.go:143] copyHostCerts
	I0914 00:27:12.055126   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:27:12.055154   43671 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:27:12.055165   43671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:27:12.055235   43671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:27:12.055338   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:27:12.055357   43671 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:27:12.055361   43671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:27:12.055385   43671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:27:12.055447   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:27:12.055463   43671 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:27:12.055468   43671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:27:12.055505   43671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:27:12.055567   43671 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.multinode-209237 san=[127.0.0.1 192.168.39.214 localhost minikube multinode-209237]
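As a hedged aside (not captured in this run's output): the SAN list requested above can be confirmed on the generated certificate with openssl, assuming the profile paths shown in the log, for example:

	openssl x509 -in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'

which should list 127.0.0.1, 192.168.39.214, localhost, minikube and multinode-209237 as entries.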
	I0914 00:27:12.137208   43671 provision.go:177] copyRemoteCerts
	I0914 00:27:12.137289   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:27:12.137322   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:12.140041   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.140403   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.140430   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.140645   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:12.140804   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:12.140961   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:12.141082   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:27:12.222530   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 00:27:12.222614   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:27:12.248391   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 00:27:12.248457   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 00:27:12.274271   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 00:27:12.274342   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 00:27:12.299419   43671 provision.go:87] duration metric: took 250.913735ms to configureAuth
	I0914 00:27:12.299451   43671 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:27:12.299767   43671 config.go:182] Loaded profile config "multinode-209237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:27:12.299877   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:12.302632   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.302993   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.303029   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.303183   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:12.303384   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:12.303525   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:12.303657   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:12.303848   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:12.304060   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:12.304081   43671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:28:43.095199   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:28:43.095232   43671 machine.go:96] duration metric: took 1m31.379388341s to provisionDockerMachine
	I0914 00:28:43.095245   43671 start.go:293] postStartSetup for "multinode-209237" (driver="kvm2")
	I0914 00:28:43.095257   43671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:28:43.095277   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.095611   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:28:43.095646   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.098711   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.099117   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.099148   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.099295   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.099464   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.099620   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.099760   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:28:43.187315   43671 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:28:43.191225   43671 command_runner.go:130] > NAME=Buildroot
	I0914 00:28:43.191245   43671 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0914 00:28:43.191249   43671 command_runner.go:130] > ID=buildroot
	I0914 00:28:43.191254   43671 command_runner.go:130] > VERSION_ID=2023.02.9
	I0914 00:28:43.191258   43671 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0914 00:28:43.191295   43671 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:28:43.191314   43671 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:28:43.191378   43671 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:28:43.191458   43671 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:28:43.191470   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0914 00:28:43.191549   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:28:43.200932   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:28:43.223950   43671 start.go:296] duration metric: took 128.692129ms for postStartSetup
	I0914 00:28:43.223988   43671 fix.go:56] duration metric: took 1m31.530248803s for fixHost
	I0914 00:28:43.224008   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.227098   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.227550   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.227580   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.227730   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.228027   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.228181   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.228337   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.228480   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:28:43.228685   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:28:43.228695   43671 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:28:43.328407   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726273723.305191186
	
	I0914 00:28:43.328429   43671 fix.go:216] guest clock: 1726273723.305191186
	I0914 00:28:43.328438   43671 fix.go:229] Guest: 2024-09-14 00:28:43.305191186 +0000 UTC Remote: 2024-09-14 00:28:43.22399252 +0000 UTC m=+91.654910852 (delta=81.198666ms)
	I0914 00:28:43.328477   43671 fix.go:200] guest clock delta is within tolerance: 81.198666ms
	I0914 00:28:43.328492   43671 start.go:83] releasing machines lock for "multinode-209237", held for 1m31.63477863s
	I0914 00:28:43.328513   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.328820   43671 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:28:43.331920   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.332287   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.332307   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.332519   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.333145   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.333333   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.333430   43671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:28:43.333483   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.333553   43671 ssh_runner.go:195] Run: cat /version.json
	I0914 00:28:43.333578   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.336090   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336390   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336472   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.336508   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336672   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.336785   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.336814   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336815   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.336954   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.336963   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.337129   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:28:43.337162   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.337291   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.337428   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:28:43.413229   43671 command_runner.go:130] > {"iso_version": "v1.34.0-1726243933-19640", "kicbase_version": "v0.0.45-1726193793-19634", "minikube_version": "v1.34.0", "commit": "e7c5cc0da7d849951636fa2daac0332e4074a4f1"}
	I0914 00:28:43.413390   43671 ssh_runner.go:195] Run: systemctl --version
	I0914 00:28:43.452484   43671 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 00:28:43.452532   43671 command_runner.go:130] > systemd 252 (252)
	I0914 00:28:43.452563   43671 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0914 00:28:43.452620   43671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:28:43.613356   43671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 00:28:43.621004   43671 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 00:28:43.621063   43671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:28:43.621106   43671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:28:43.630009   43671 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 00:28:43.630037   43671 start.go:495] detecting cgroup driver to use...
	I0914 00:28:43.630100   43671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:28:43.645753   43671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:28:43.660142   43671 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:28:43.660201   43671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:28:43.673974   43671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:28:43.687282   43671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:28:43.845933   43671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:28:43.995728   43671 docker.go:233] disabling docker service ...
	I0914 00:28:43.995815   43671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:28:44.011990   43671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:28:44.025215   43671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:28:44.165065   43671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:28:44.301464   43671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:28:44.314952   43671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:28:44.335755   43671 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 00:28:44.335815   43671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:28:44.335857   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.346352   43671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:28:44.346417   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.356690   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.366779   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.376947   43671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:28:44.387218   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.397320   43671 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.408117   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.418435   43671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:28:44.427853   43671 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 00:28:44.427924   43671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:28:44.437335   43671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:28:44.593998   43671 ssh_runner.go:195] Run: sudo systemctl restart crio
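As a sketch (not output from this run), the net effect of the sed edits above on the CRI-O drop-in can be spot-checked with grep, assuming the default /etc/crio/crio.conf.d/02-crio.conf location used in the log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf

which should show roughly pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0.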
	I0914 00:28:44.796545   43671 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:28:44.796625   43671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:28:44.801401   43671 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 00:28:44.801426   43671 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 00:28:44.801433   43671 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0914 00:28:44.801440   43671 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 00:28:44.801446   43671 command_runner.go:130] > Access: 2024-09-14 00:28:44.665500227 +0000
	I0914 00:28:44.801452   43671 command_runner.go:130] > Modify: 2024-09-14 00:28:44.665500227 +0000
	I0914 00:28:44.801457   43671 command_runner.go:130] > Change: 2024-09-14 00:28:44.665500227 +0000
	I0914 00:28:44.801461   43671 command_runner.go:130] >  Birth: -
	I0914 00:28:44.801478   43671 start.go:563] Will wait 60s for crictl version
	I0914 00:28:44.801532   43671 ssh_runner.go:195] Run: which crictl
	I0914 00:28:44.805469   43671 command_runner.go:130] > /usr/bin/crictl
	I0914 00:28:44.805562   43671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:28:44.847205   43671 command_runner.go:130] > Version:  0.1.0
	I0914 00:28:44.847237   43671 command_runner.go:130] > RuntimeName:  cri-o
	I0914 00:28:44.847244   43671 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0914 00:28:44.847249   43671 command_runner.go:130] > RuntimeApiVersion:  v1
	I0914 00:28:44.847267   43671 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:28:44.847352   43671 ssh_runner.go:195] Run: crio --version
	I0914 00:28:44.876953   43671 command_runner.go:130] > crio version 1.29.1
	I0914 00:28:44.876979   43671 command_runner.go:130] > Version:        1.29.1
	I0914 00:28:44.876985   43671 command_runner.go:130] > GitCommit:      unknown
	I0914 00:28:44.876989   43671 command_runner.go:130] > GitCommitDate:  unknown
	I0914 00:28:44.876993   43671 command_runner.go:130] > GitTreeState:   clean
	I0914 00:28:44.876999   43671 command_runner.go:130] > BuildDate:      2024-09-13T21:54:05Z
	I0914 00:28:44.877003   43671 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 00:28:44.877006   43671 command_runner.go:130] > Compiler:       gc
	I0914 00:28:44.877011   43671 command_runner.go:130] > Platform:       linux/amd64
	I0914 00:28:44.877014   43671 command_runner.go:130] > Linkmode:       dynamic
	I0914 00:28:44.877018   43671 command_runner.go:130] > BuildTags:      
	I0914 00:28:44.877022   43671 command_runner.go:130] >   containers_image_ostree_stub
	I0914 00:28:44.877026   43671 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 00:28:44.877030   43671 command_runner.go:130] >   btrfs_noversion
	I0914 00:28:44.877035   43671 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 00:28:44.877039   43671 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 00:28:44.877043   43671 command_runner.go:130] >   seccomp
	I0914 00:28:44.877047   43671 command_runner.go:130] > LDFlags:          unknown
	I0914 00:28:44.877052   43671 command_runner.go:130] > SeccompEnabled:   true
	I0914 00:28:44.877067   43671 command_runner.go:130] > AppArmorEnabled:  false
	I0914 00:28:44.877129   43671 ssh_runner.go:195] Run: crio --version
	I0914 00:28:44.904807   43671 command_runner.go:130] > crio version 1.29.1
	I0914 00:28:44.904827   43671 command_runner.go:130] > Version:        1.29.1
	I0914 00:28:44.904833   43671 command_runner.go:130] > GitCommit:      unknown
	I0914 00:28:44.904837   43671 command_runner.go:130] > GitCommitDate:  unknown
	I0914 00:28:44.904841   43671 command_runner.go:130] > GitTreeState:   clean
	I0914 00:28:44.904860   43671 command_runner.go:130] > BuildDate:      2024-09-13T21:54:05Z
	I0914 00:28:44.904864   43671 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 00:28:44.904868   43671 command_runner.go:130] > Compiler:       gc
	I0914 00:28:44.904873   43671 command_runner.go:130] > Platform:       linux/amd64
	I0914 00:28:44.904876   43671 command_runner.go:130] > Linkmode:       dynamic
	I0914 00:28:44.904881   43671 command_runner.go:130] > BuildTags:      
	I0914 00:28:44.904885   43671 command_runner.go:130] >   containers_image_ostree_stub
	I0914 00:28:44.904889   43671 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 00:28:44.904893   43671 command_runner.go:130] >   btrfs_noversion
	I0914 00:28:44.904898   43671 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 00:28:44.904903   43671 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 00:28:44.904906   43671 command_runner.go:130] >   seccomp
	I0914 00:28:44.904911   43671 command_runner.go:130] > LDFlags:          unknown
	I0914 00:28:44.904917   43671 command_runner.go:130] > SeccompEnabled:   true
	I0914 00:28:44.904921   43671 command_runner.go:130] > AppArmorEnabled:  false
	I0914 00:28:44.907889   43671 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 00:28:44.909240   43671 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:28:44.912019   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:44.912345   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:44.912379   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:44.912579   43671 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 00:28:44.916677   43671 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0914 00:28:44.916782   43671 kubeadm.go:883] updating cluster {Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:28:44.916950   43671 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:28:44.917011   43671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:28:44.955755   43671 command_runner.go:130] > {
	I0914 00:28:44.955782   43671 command_runner.go:130] >   "images": [
	I0914 00:28:44.955806   43671 command_runner.go:130] >     {
	I0914 00:28:44.955817   43671 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 00:28:44.955824   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.955832   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 00:28:44.955837   43671 command_runner.go:130] >       ],
	I0914 00:28:44.955843   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.955863   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 00:28:44.955877   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 00:28:44.955890   43671 command_runner.go:130] >       ],
	I0914 00:28:44.955900   43671 command_runner.go:130] >       "size": "87190579",
	I0914 00:28:44.955906   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.955913   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.955924   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.955931   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.955938   43671 command_runner.go:130] >     },
	I0914 00:28:44.955944   43671 command_runner.go:130] >     {
	I0914 00:28:44.955956   43671 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 00:28:44.955963   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.955972   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 00:28:44.955981   43671 command_runner.go:130] >       ],
	I0914 00:28:44.955988   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956001   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 00:28:44.956014   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 00:28:44.956022   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956030   43671 command_runner.go:130] >       "size": "1363676",
	I0914 00:28:44.956039   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.956051   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956060   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956067   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956075   43671 command_runner.go:130] >     },
	I0914 00:28:44.956082   43671 command_runner.go:130] >     {
	I0914 00:28:44.956092   43671 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 00:28:44.956101   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956111   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 00:28:44.956120   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956127   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956142   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 00:28:44.956158   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 00:28:44.956167   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956175   43671 command_runner.go:130] >       "size": "31470524",
	I0914 00:28:44.956182   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.956198   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956208   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956217   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956223   43671 command_runner.go:130] >     },
	I0914 00:28:44.956231   43671 command_runner.go:130] >     {
	I0914 00:28:44.956243   43671 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 00:28:44.956251   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956261   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 00:28:44.956270   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956277   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956292   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 00:28:44.956313   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 00:28:44.956319   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956327   43671 command_runner.go:130] >       "size": "63273227",
	I0914 00:28:44.956336   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.956346   43671 command_runner.go:130] >       "username": "nonroot",
	I0914 00:28:44.956356   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956365   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956372   43671 command_runner.go:130] >     },
	I0914 00:28:44.956379   43671 command_runner.go:130] >     {
	I0914 00:28:44.956392   43671 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 00:28:44.956399   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956410   43671 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 00:28:44.956419   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956426   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956441   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 00:28:44.956455   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 00:28:44.956464   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956471   43671 command_runner.go:130] >       "size": "149009664",
	I0914 00:28:44.956480   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.956487   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.956496   43671 command_runner.go:130] >       },
	I0914 00:28:44.956503   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956520   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956530   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956537   43671 command_runner.go:130] >     },
	I0914 00:28:44.956558   43671 command_runner.go:130] >     {
	I0914 00:28:44.956579   43671 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 00:28:44.956588   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956598   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 00:28:44.956606   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956613   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956628   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 00:28:44.956643   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 00:28:44.956652   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956659   43671 command_runner.go:130] >       "size": "95237600",
	I0914 00:28:44.956665   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.956672   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.956681   43671 command_runner.go:130] >       },
	I0914 00:28:44.956688   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956698   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956707   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956713   43671 command_runner.go:130] >     },
	I0914 00:28:44.956720   43671 command_runner.go:130] >     {
	I0914 00:28:44.956731   43671 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 00:28:44.956741   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956750   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 00:28:44.956758   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956765   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956779   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 00:28:44.956795   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 00:28:44.956804   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956811   43671 command_runner.go:130] >       "size": "89437508",
	I0914 00:28:44.956821   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.956829   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.956837   43671 command_runner.go:130] >       },
	I0914 00:28:44.956851   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956860   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956868   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956875   43671 command_runner.go:130] >     },
	I0914 00:28:44.956881   43671 command_runner.go:130] >     {
	I0914 00:28:44.956902   43671 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 00:28:44.956910   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956920   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 00:28:44.956928   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956936   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956968   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 00:28:44.956981   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 00:28:44.956987   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956992   43671 command_runner.go:130] >       "size": "92733849",
	I0914 00:28:44.956999   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.957008   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.957013   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.957018   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.957023   43671 command_runner.go:130] >     },
	I0914 00:28:44.957027   43671 command_runner.go:130] >     {
	I0914 00:28:44.957036   43671 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 00:28:44.957041   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.957048   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 00:28:44.957053   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957061   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.957072   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 00:28:44.957084   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 00:28:44.957091   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957098   43671 command_runner.go:130] >       "size": "68420934",
	I0914 00:28:44.957105   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.957111   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.957117   43671 command_runner.go:130] >       },
	I0914 00:28:44.957123   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.957138   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.957146   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.957151   43671 command_runner.go:130] >     },
	I0914 00:28:44.957158   43671 command_runner.go:130] >     {
	I0914 00:28:44.957172   43671 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 00:28:44.957181   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.957189   43671 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 00:28:44.957198   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957205   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.957219   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 00:28:44.957234   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 00:28:44.957242   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957250   43671 command_runner.go:130] >       "size": "742080",
	I0914 00:28:44.957259   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.957267   43671 command_runner.go:130] >         "value": "65535"
	I0914 00:28:44.957274   43671 command_runner.go:130] >       },
	I0914 00:28:44.957282   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.957291   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.957297   43671 command_runner.go:130] >       "pinned": true
	I0914 00:28:44.957304   43671 command_runner.go:130] >     }
	I0914 00:28:44.957311   43671 command_runner.go:130] >   ]
	I0914 00:28:44.957317   43671 command_runner.go:130] > }
	I0914 00:28:44.957495   43671 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:28:44.957508   43671 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:28:44.957571   43671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:28:44.990992   43671 command_runner.go:130] > {
	I0914 00:28:44.991017   43671 command_runner.go:130] >   "images": [
	I0914 00:28:44.991021   43671 command_runner.go:130] >     {
	I0914 00:28:44.991028   43671 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 00:28:44.991033   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991038   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 00:28:44.991043   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991047   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991059   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 00:28:44.991078   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 00:28:44.991087   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991094   43671 command_runner.go:130] >       "size": "87190579",
	I0914 00:28:44.991102   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991106   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991118   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991125   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991128   43671 command_runner.go:130] >     },
	I0914 00:28:44.991134   43671 command_runner.go:130] >     {
	I0914 00:28:44.991143   43671 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 00:28:44.991152   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991163   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 00:28:44.991173   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991179   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991194   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 00:28:44.991205   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 00:28:44.991211   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991216   43671 command_runner.go:130] >       "size": "1363676",
	I0914 00:28:44.991224   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991236   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991246   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991256   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991264   43671 command_runner.go:130] >     },
	I0914 00:28:44.991269   43671 command_runner.go:130] >     {
	I0914 00:28:44.991282   43671 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 00:28:44.991288   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991296   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 00:28:44.991303   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991308   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991325   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 00:28:44.991341   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 00:28:44.991349   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991359   43671 command_runner.go:130] >       "size": "31470524",
	I0914 00:28:44.991375   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991383   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991388   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991395   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991401   43671 command_runner.go:130] >     },
	I0914 00:28:44.991412   43671 command_runner.go:130] >     {
	I0914 00:28:44.991425   43671 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 00:28:44.991434   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991450   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 00:28:44.991459   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991468   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991478   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 00:28:44.991500   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 00:28:44.991510   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991520   43671 command_runner.go:130] >       "size": "63273227",
	I0914 00:28:44.991529   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991539   43671 command_runner.go:130] >       "username": "nonroot",
	I0914 00:28:44.991552   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991560   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991563   43671 command_runner.go:130] >     },
	I0914 00:28:44.991568   43671 command_runner.go:130] >     {
	I0914 00:28:44.991581   43671 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 00:28:44.991591   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991601   43671 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 00:28:44.991610   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991619   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991633   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 00:28:44.991644   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 00:28:44.991651   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991657   43671 command_runner.go:130] >       "size": "149009664",
	I0914 00:28:44.991666   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.991673   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.991682   43671 command_runner.go:130] >       },
	I0914 00:28:44.991696   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991707   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991716   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991724   43671 command_runner.go:130] >     },
	I0914 00:28:44.991730   43671 command_runner.go:130] >     {
	I0914 00:28:44.991737   43671 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 00:28:44.991745   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991756   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 00:28:44.991764   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991774   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991799   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 00:28:44.991815   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 00:28:44.991823   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991833   43671 command_runner.go:130] >       "size": "95237600",
	I0914 00:28:44.991842   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.991851   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.991858   43671 command_runner.go:130] >       },
	I0914 00:28:44.991863   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991871   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991880   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991889   43671 command_runner.go:130] >     },
	I0914 00:28:44.991897   43671 command_runner.go:130] >     {
	I0914 00:28:44.991907   43671 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 00:28:44.991917   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991928   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 00:28:44.991936   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991942   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991952   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 00:28:44.991972   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 00:28:44.991984   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991993   43671 command_runner.go:130] >       "size": "89437508",
	I0914 00:28:44.992002   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.992011   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.992025   43671 command_runner.go:130] >       },
	I0914 00:28:44.992032   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992037   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992046   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.992054   43671 command_runner.go:130] >     },
	I0914 00:28:44.992063   43671 command_runner.go:130] >     {
	I0914 00:28:44.992076   43671 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 00:28:44.992085   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.992096   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 00:28:44.992103   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992112   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.992138   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 00:28:44.992153   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 00:28:44.992161   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992168   43671 command_runner.go:130] >       "size": "92733849",
	I0914 00:28:44.992177   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.992186   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992196   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992203   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.992206   43671 command_runner.go:130] >     },
	I0914 00:28:44.992212   43671 command_runner.go:130] >     {
	I0914 00:28:44.992224   43671 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 00:28:44.992233   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.992244   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 00:28:44.992253   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992262   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.992277   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 00:28:44.992288   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 00:28:44.992294   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992301   43671 command_runner.go:130] >       "size": "68420934",
	I0914 00:28:44.992309   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.992318   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.992327   43671 command_runner.go:130] >       },
	I0914 00:28:44.992344   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992353   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992362   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.992370   43671 command_runner.go:130] >     },
	I0914 00:28:44.992378   43671 command_runner.go:130] >     {
	I0914 00:28:44.992384   43671 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 00:28:44.992392   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.992402   43671 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 00:28:44.992410   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992421   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.992435   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 00:28:44.992460   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 00:28:44.992467   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992472   43671 command_runner.go:130] >       "size": "742080",
	I0914 00:28:44.992477   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.992486   43671 command_runner.go:130] >         "value": "65535"
	I0914 00:28:44.992495   43671 command_runner.go:130] >       },
	I0914 00:28:44.992505   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992514   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992523   43671 command_runner.go:130] >       "pinned": true
	I0914 00:28:44.992531   43671 command_runner.go:130] >     }
	I0914 00:28:44.992545   43671 command_runner.go:130] >   ]
	I0914 00:28:44.992557   43671 command_runner.go:130] > }
	I0914 00:28:44.992724   43671 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:28:44.992736   43671 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:28:44.992745   43671 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.31.1 crio true true} ...
	I0914 00:28:44.992863   43671 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-209237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:28:44.992955   43671 ssh_runner.go:195] Run: crio config
	I0914 00:28:45.035177   43671 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 00:28:45.035205   43671 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 00:28:45.035215   43671 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 00:28:45.035220   43671 command_runner.go:130] > #
	I0914 00:28:45.035229   43671 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 00:28:45.035238   43671 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 00:28:45.035247   43671 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 00:28:45.035258   43671 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 00:28:45.035264   43671 command_runner.go:130] > # reload'.
	I0914 00:28:45.035275   43671 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 00:28:45.035288   43671 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 00:28:45.035299   43671 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 00:28:45.035312   43671 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 00:28:45.035319   43671 command_runner.go:130] > [crio]
	I0914 00:28:45.035329   43671 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 00:28:45.035347   43671 command_runner.go:130] > # containers images, in this directory.
	I0914 00:28:45.035519   43671 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 00:28:45.035569   43671 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 00:28:45.035636   43671 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 00:28:45.035653   43671 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0914 00:28:45.035734   43671 command_runner.go:130] > # imagestore = ""
	I0914 00:28:45.035759   43671 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 00:28:45.035771   43671 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 00:28:45.035875   43671 command_runner.go:130] > storage_driver = "overlay"
	I0914 00:28:45.035886   43671 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 00:28:45.035892   43671 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 00:28:45.035897   43671 command_runner.go:130] > storage_option = [
	I0914 00:28:45.036034   43671 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 00:28:45.036095   43671 command_runner.go:130] > ]
	I0914 00:28:45.036110   43671 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 00:28:45.036119   43671 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 00:28:45.036350   43671 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 00:28:45.036366   43671 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 00:28:45.036376   43671 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 00:28:45.036383   43671 command_runner.go:130] > # always happen on a node reboot
	I0914 00:28:45.036604   43671 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 00:28:45.036657   43671 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 00:28:45.036675   43671 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 00:28:45.036682   43671 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 00:28:45.036814   43671 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0914 00:28:45.036832   43671 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 00:28:45.036845   43671 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 00:28:45.037020   43671 command_runner.go:130] > # internal_wipe = true
	I0914 00:28:45.037038   43671 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0914 00:28:45.037047   43671 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0914 00:28:45.037255   43671 command_runner.go:130] > # internal_repair = false
	I0914 00:28:45.037266   43671 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 00:28:45.037273   43671 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 00:28:45.037279   43671 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 00:28:45.037498   43671 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 00:28:45.037509   43671 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 00:28:45.037519   43671 command_runner.go:130] > [crio.api]
	I0914 00:28:45.037524   43671 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 00:28:45.037781   43671 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 00:28:45.037801   43671 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 00:28:45.038013   43671 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 00:28:45.038030   43671 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 00:28:45.038038   43671 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 00:28:45.038247   43671 command_runner.go:130] > # stream_port = "0"
	I0914 00:28:45.038262   43671 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 00:28:45.038490   43671 command_runner.go:130] > # stream_enable_tls = false
	I0914 00:28:45.038514   43671 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 00:28:45.038733   43671 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 00:28:45.038750   43671 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 00:28:45.038760   43671 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 00:28:45.038766   43671 command_runner.go:130] > # minutes.
	I0914 00:28:45.038924   43671 command_runner.go:130] > # stream_tls_cert = ""
	I0914 00:28:45.038934   43671 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 00:28:45.038940   43671 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 00:28:45.039092   43671 command_runner.go:130] > # stream_tls_key = ""
	I0914 00:28:45.039106   43671 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 00:28:45.039116   43671 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 00:28:45.039144   43671 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 00:28:45.039283   43671 command_runner.go:130] > # stream_tls_ca = ""
	I0914 00:28:45.039295   43671 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 00:28:45.039400   43671 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 00:28:45.039420   43671 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 00:28:45.039534   43671 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 00:28:45.039549   43671 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 00:28:45.039567   43671 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 00:28:45.039576   43671 command_runner.go:130] > [crio.runtime]
	I0914 00:28:45.039585   43671 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 00:28:45.039596   43671 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 00:28:45.039602   43671 command_runner.go:130] > # "nofile=1024:2048"
	I0914 00:28:45.039613   43671 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 00:28:45.039644   43671 command_runner.go:130] > # default_ulimits = [
	I0914 00:28:45.039917   43671 command_runner.go:130] > # ]
	I0914 00:28:45.039983   43671 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 00:28:45.040094   43671 command_runner.go:130] > # no_pivot = false
	I0914 00:28:45.040108   43671 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 00:28:45.040117   43671 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 00:28:45.040345   43671 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 00:28:45.040363   43671 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 00:28:45.040369   43671 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 00:28:45.040377   43671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 00:28:45.040470   43671 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 00:28:45.040482   43671 command_runner.go:130] > # Cgroup setting for conmon
	I0914 00:28:45.040493   43671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 00:28:45.040596   43671 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 00:28:45.040613   43671 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 00:28:45.040621   43671 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 00:28:45.040632   43671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 00:28:45.040640   43671 command_runner.go:130] > conmon_env = [
	I0914 00:28:45.040752   43671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 00:28:45.040788   43671 command_runner.go:130] > ]
	I0914 00:28:45.040800   43671 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 00:28:45.040812   43671 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 00:28:45.040821   43671 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 00:28:45.040908   43671 command_runner.go:130] > # default_env = [
	I0914 00:28:45.041059   43671 command_runner.go:130] > # ]
	I0914 00:28:45.041076   43671 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 00:28:45.041088   43671 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0914 00:28:45.041300   43671 command_runner.go:130] > # selinux = false
	I0914 00:28:45.041323   43671 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 00:28:45.041333   43671 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 00:28:45.041346   43671 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 00:28:45.041486   43671 command_runner.go:130] > # seccomp_profile = ""
	I0914 00:28:45.041501   43671 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 00:28:45.041509   43671 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 00:28:45.041518   43671 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 00:28:45.041524   43671 command_runner.go:130] > # which might increase security.
	I0914 00:28:45.041532   43671 command_runner.go:130] > # This option is currently deprecated,
	I0914 00:28:45.041542   43671 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0914 00:28:45.041616   43671 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 00:28:45.041634   43671 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 00:28:45.041648   43671 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 00:28:45.041660   43671 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 00:28:45.041667   43671 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 00:28:45.041672   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.041883   43671 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 00:28:45.041896   43671 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 00:28:45.041903   43671 command_runner.go:130] > # the cgroup blockio controller.
	I0914 00:28:45.042086   43671 command_runner.go:130] > # blockio_config_file = ""
	I0914 00:28:45.042099   43671 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0914 00:28:45.042105   43671 command_runner.go:130] > # blockio parameters.
	I0914 00:28:45.042330   43671 command_runner.go:130] > # blockio_reload = false
	I0914 00:28:45.042344   43671 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 00:28:45.042350   43671 command_runner.go:130] > # irqbalance daemon.
	I0914 00:28:45.042566   43671 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 00:28:45.042579   43671 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0914 00:28:45.042589   43671 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0914 00:28:45.042600   43671 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0914 00:28:45.042828   43671 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0914 00:28:45.042841   43671 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 00:28:45.042850   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.043000   43671 command_runner.go:130] > # rdt_config_file = ""
	I0914 00:28:45.043011   43671 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 00:28:45.043128   43671 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 00:28:45.043171   43671 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 00:28:45.043318   43671 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 00:28:45.043341   43671 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 00:28:45.043352   43671 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 00:28:45.043362   43671 command_runner.go:130] > # will be added.
	I0914 00:28:45.043448   43671 command_runner.go:130] > # default_capabilities = [
	I0914 00:28:45.043855   43671 command_runner.go:130] > # 	"CHOWN",
	I0914 00:28:45.044036   43671 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 00:28:45.044370   43671 command_runner.go:130] > # 	"FSETID",
	I0914 00:28:45.044611   43671 command_runner.go:130] > # 	"FOWNER",
	I0914 00:28:45.045001   43671 command_runner.go:130] > # 	"SETGID",
	I0914 00:28:45.045150   43671 command_runner.go:130] > # 	"SETUID",
	I0914 00:28:45.045381   43671 command_runner.go:130] > # 	"SETPCAP",
	I0914 00:28:45.045580   43671 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 00:28:45.045814   43671 command_runner.go:130] > # 	"KILL",
	I0914 00:28:45.045923   43671 command_runner.go:130] > # ]
	I0914 00:28:45.045939   43671 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0914 00:28:45.045950   43671 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0914 00:28:45.046177   43671 command_runner.go:130] > # add_inheritable_capabilities = false
	I0914 00:28:45.046191   43671 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 00:28:45.046200   43671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 00:28:45.046206   43671 command_runner.go:130] > default_sysctls = [
	I0914 00:28:45.046267   43671 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0914 00:28:45.046344   43671 command_runner.go:130] > ]
	I0914 00:28:45.046355   43671 command_runner.go:130] > # List of devices on the host that a
	I0914 00:28:45.046366   43671 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 00:28:45.046463   43671 command_runner.go:130] > # allowed_devices = [
	I0914 00:28:45.046600   43671 command_runner.go:130] > # 	"/dev/fuse",
	I0914 00:28:45.046794   43671 command_runner.go:130] > # ]
	I0914 00:28:45.046802   43671 command_runner.go:130] > # List of additional devices. specified as
	I0914 00:28:45.046813   43671 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 00:28:45.046824   43671 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 00:28:45.046835   43671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 00:28:45.046872   43671 command_runner.go:130] > # additional_devices = [
	I0914 00:28:45.046995   43671 command_runner.go:130] > # ]
	I0914 00:28:45.047004   43671 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 00:28:45.047112   43671 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 00:28:45.047242   43671 command_runner.go:130] > # 	"/etc/cdi",
	I0914 00:28:45.047389   43671 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 00:28:45.048684   43671 command_runner.go:130] > # ]
	I0914 00:28:45.048700   43671 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 00:28:45.048710   43671 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 00:28:45.048715   43671 command_runner.go:130] > # Defaults to false.
	I0914 00:28:45.048733   43671 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 00:28:45.048744   43671 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 00:28:45.048754   43671 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 00:28:45.048763   43671 command_runner.go:130] > # hooks_dir = [
	I0914 00:28:45.048770   43671 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 00:28:45.048777   43671 command_runner.go:130] > # ]
	I0914 00:28:45.048788   43671 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 00:28:45.048801   43671 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 00:28:45.048810   43671 command_runner.go:130] > # its default mounts from the following two files:
	I0914 00:28:45.048818   43671 command_runner.go:130] > #
	I0914 00:28:45.048829   43671 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 00:28:45.048843   43671 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 00:28:45.048852   43671 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 00:28:45.048860   43671 command_runner.go:130] > #
	I0914 00:28:45.048872   43671 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 00:28:45.048885   43671 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 00:28:45.048899   43671 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 00:28:45.048910   43671 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 00:28:45.048918   43671 command_runner.go:130] > #
	I0914 00:28:45.048926   43671 command_runner.go:130] > # default_mounts_file = ""
	I0914 00:28:45.048939   43671 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 00:28:45.048951   43671 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 00:28:45.048960   43671 command_runner.go:130] > pids_limit = 1024
	I0914 00:28:45.048971   43671 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0914 00:28:45.048984   43671 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 00:28:45.048997   43671 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 00:28:45.049014   43671 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 00:28:45.049023   43671 command_runner.go:130] > # log_size_max = -1
	I0914 00:28:45.049036   43671 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 00:28:45.049044   43671 command_runner.go:130] > # log_to_journald = false
	I0914 00:28:45.049056   43671 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 00:28:45.049066   43671 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 00:28:45.049076   43671 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 00:28:45.049126   43671 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 00:28:45.049139   43671 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 00:28:45.049145   43671 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 00:28:45.049154   43671 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 00:28:45.049163   43671 command_runner.go:130] > # read_only = false
	I0914 00:28:45.049174   43671 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 00:28:45.049187   43671 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 00:28:45.049197   43671 command_runner.go:130] > # live configuration reload.
	I0914 00:28:45.049206   43671 command_runner.go:130] > # log_level = "info"
	I0914 00:28:45.049216   43671 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 00:28:45.049227   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.049237   43671 command_runner.go:130] > # log_filter = ""
	I0914 00:28:45.049247   43671 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 00:28:45.049265   43671 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 00:28:45.049274   43671 command_runner.go:130] > # separated by comma.
	I0914 00:28:45.049286   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049295   43671 command_runner.go:130] > # uid_mappings = ""
	I0914 00:28:45.049305   43671 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 00:28:45.049318   43671 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 00:28:45.049327   43671 command_runner.go:130] > # separated by comma.
	I0914 00:28:45.049341   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049350   43671 command_runner.go:130] > # gid_mappings = ""
	I0914 00:28:45.049361   43671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 00:28:45.049375   43671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 00:28:45.049390   43671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 00:28:45.049406   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049417   43671 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 00:28:45.049428   43671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 00:28:45.049441   43671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 00:28:45.049459   43671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 00:28:45.049474   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049484   43671 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 00:28:45.049497   43671 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 00:28:45.049516   43671 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 00:28:45.049529   43671 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 00:28:45.049538   43671 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 00:28:45.049548   43671 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 00:28:45.049560   43671 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 00:28:45.049571   43671 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 00:28:45.049582   43671 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 00:28:45.049592   43671 command_runner.go:130] > drop_infra_ctr = false
	I0914 00:28:45.049604   43671 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 00:28:45.049616   43671 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 00:28:45.049629   43671 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 00:28:45.049639   43671 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 00:28:45.049653   43671 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0914 00:28:45.049664   43671 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0914 00:28:45.049674   43671 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0914 00:28:45.049687   43671 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0914 00:28:45.049696   43671 command_runner.go:130] > # shared_cpuset = ""
	I0914 00:28:45.049708   43671 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 00:28:45.049719   43671 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 00:28:45.049729   43671 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 00:28:45.049744   43671 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 00:28:45.049756   43671 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 00:28:45.049767   43671 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0914 00:28:45.049777   43671 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0914 00:28:45.049786   43671 command_runner.go:130] > # enable_criu_support = false
	I0914 00:28:45.049795   43671 command_runner.go:130] > # Enable/disable the generation of the container,
	I0914 00:28:45.049810   43671 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0914 00:28:45.049821   43671 command_runner.go:130] > # enable_pod_events = false
	I0914 00:28:45.049834   43671 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 00:28:45.049847   43671 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 00:28:45.049858   43671 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0914 00:28:45.049866   43671 command_runner.go:130] > # default_runtime = "runc"
	I0914 00:28:45.049878   43671 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 00:28:45.049896   43671 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 00:28:45.049914   43671 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 00:28:45.049925   43671 command_runner.go:130] > # creation as a file is not desired either.
	I0914 00:28:45.049939   43671 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 00:28:45.049949   43671 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 00:28:45.049958   43671 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 00:28:45.049965   43671 command_runner.go:130] > # ]
	I0914 00:28:45.049977   43671 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 00:28:45.049990   43671 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 00:28:45.050002   43671 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0914 00:28:45.050013   43671 command_runner.go:130] > # Each entry in the table should follow the format:
	I0914 00:28:45.050021   43671 command_runner.go:130] > #
	I0914 00:28:45.050030   43671 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0914 00:28:45.050040   43671 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0914 00:28:45.050090   43671 command_runner.go:130] > # runtime_type = "oci"
	I0914 00:28:45.050099   43671 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0914 00:28:45.050107   43671 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0914 00:28:45.050115   43671 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0914 00:28:45.050125   43671 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0914 00:28:45.050133   43671 command_runner.go:130] > # monitor_env = []
	I0914 00:28:45.050143   43671 command_runner.go:130] > # privileged_without_host_devices = false
	I0914 00:28:45.050153   43671 command_runner.go:130] > # allowed_annotations = []
	I0914 00:28:45.050162   43671 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0914 00:28:45.050170   43671 command_runner.go:130] > # Where:
	I0914 00:28:45.050179   43671 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0914 00:28:45.050193   43671 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0914 00:28:45.050206   43671 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 00:28:45.050219   43671 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 00:28:45.050234   43671 command_runner.go:130] > #   in $PATH.
	I0914 00:28:45.050248   43671 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0914 00:28:45.050259   43671 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 00:28:45.050287   43671 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0914 00:28:45.050296   43671 command_runner.go:130] > #   state.
	I0914 00:28:45.050313   43671 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 00:28:45.050325   43671 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0914 00:28:45.050339   43671 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 00:28:45.050350   43671 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 00:28:45.050364   43671 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 00:28:45.050377   43671 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 00:28:45.050387   43671 command_runner.go:130] > #   The currently recognized values are:
	I0914 00:28:45.050397   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 00:28:45.050412   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 00:28:45.050425   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 00:28:45.050437   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 00:28:45.050452   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 00:28:45.050465   43671 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 00:28:45.050479   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0914 00:28:45.050489   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0914 00:28:45.050501   43671 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 00:28:45.050514   43671 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0914 00:28:45.050522   43671 command_runner.go:130] > #   deprecated option "conmon".
	I0914 00:28:45.050535   43671 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0914 00:28:45.050547   43671 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0914 00:28:45.050561   43671 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0914 00:28:45.050572   43671 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 00:28:45.050586   43671 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0914 00:28:45.050597   43671 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0914 00:28:45.050615   43671 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0914 00:28:45.050627   43671 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0914 00:28:45.050635   43671 command_runner.go:130] > #
	I0914 00:28:45.050643   43671 command_runner.go:130] > # Using the seccomp notifier feature:
	I0914 00:28:45.050649   43671 command_runner.go:130] > #
	I0914 00:28:45.050659   43671 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0914 00:28:45.050672   43671 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0914 00:28:45.050683   43671 command_runner.go:130] > #
	I0914 00:28:45.050694   43671 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0914 00:28:45.050714   43671 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0914 00:28:45.050722   43671 command_runner.go:130] > #
	I0914 00:28:45.050732   43671 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0914 00:28:45.050740   43671 command_runner.go:130] > # feature.
	I0914 00:28:45.050746   43671 command_runner.go:130] > #
	I0914 00:28:45.050762   43671 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0914 00:28:45.050774   43671 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0914 00:28:45.050788   43671 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0914 00:28:45.050801   43671 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0914 00:28:45.050814   43671 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0914 00:28:45.050821   43671 command_runner.go:130] > #
	I0914 00:28:45.050832   43671 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0914 00:28:45.050846   43671 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0914 00:28:45.050853   43671 command_runner.go:130] > #
	I0914 00:28:45.050864   43671 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0914 00:28:45.050876   43671 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0914 00:28:45.050884   43671 command_runner.go:130] > #
	I0914 00:28:45.050894   43671 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0914 00:28:45.050906   43671 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0914 00:28:45.050912   43671 command_runner.go:130] > # limitation.
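As a minimal sketch of the runtime-handler options documented above (not part of this test run; the handler name "runc-debug" is made up for illustration, and the values simply mirror the documented runc defaults), a handler allowed to process the seccomp notifier annotation could be declared like this:

	# hypothetical additional handler in crio.conf; only the handler name is invented
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	# permit this handler to process the seccomp notifier annotation
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod opting in would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and, per the note above, use restartPolicy: Never so the kubelet does not restart the terminated container.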
	I0914 00:28:45.050923   43671 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 00:28:45.050933   43671 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 00:28:45.050940   43671 command_runner.go:130] > runtime_type = "oci"
	I0914 00:28:45.050950   43671 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 00:28:45.050959   43671 command_runner.go:130] > runtime_config_path = ""
	I0914 00:28:45.050968   43671 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0914 00:28:45.050977   43671 command_runner.go:130] > monitor_cgroup = "pod"
	I0914 00:28:45.050985   43671 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 00:28:45.050993   43671 command_runner.go:130] > monitor_env = [
	I0914 00:28:45.051002   43671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 00:28:45.051009   43671 command_runner.go:130] > ]
	I0914 00:28:45.051017   43671 command_runner.go:130] > privileged_without_host_devices = false
	I0914 00:28:45.051029   43671 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 00:28:45.051046   43671 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 00:28:45.051059   43671 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 00:28:45.051075   43671 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 00:28:45.051091   43671 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 00:28:45.051103   43671 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 00:28:45.051121   43671 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 00:28:45.051136   43671 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 00:28:45.051149   43671 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 00:28:45.051163   43671 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 00:28:45.051171   43671 command_runner.go:130] > # Example:
	I0914 00:28:45.051179   43671 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 00:28:45.051190   43671 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 00:28:45.051199   43671 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 00:28:45.051211   43671 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 00:28:45.051219   43671 command_runner.go:130] > # cpuset = 0
	I0914 00:28:45.051227   43671 command_runner.go:130] > # cpushares = "0-1"
	I0914 00:28:45.051235   43671 command_runner.go:130] > # Where:
	I0914 00:28:45.051243   43671 command_runner.go:130] > # The workload name is workload-type.
	I0914 00:28:45.051258   43671 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 00:28:45.051272   43671 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 00:28:45.051284   43671 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 00:28:45.051300   43671 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 00:28:45.051312   43671 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 00:28:45.051324   43671 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0914 00:28:45.051338   43671 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0914 00:28:45.051347   43671 command_runner.go:130] > # Default value is set to true
	I0914 00:28:45.051357   43671 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0914 00:28:45.051370   43671 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0914 00:28:45.051380   43671 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0914 00:28:45.051389   43671 command_runner.go:130] > # Default value is set to 'false'
	I0914 00:28:45.051399   43671 command_runner.go:130] > # disable_hostport_mapping = false
	I0914 00:28:45.051410   43671 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 00:28:45.051418   43671 command_runner.go:130] > #
	I0914 00:28:45.051433   43671 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 00:28:45.051445   43671 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 00:28:45.051457   43671 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 00:28:45.051466   43671 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 00:28:45.051472   43671 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 00:28:45.051479   43671 command_runner.go:130] > [crio.image]
	I0914 00:28:45.051494   43671 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 00:28:45.051501   43671 command_runner.go:130] > # default_transport = "docker://"
	I0914 00:28:45.051513   43671 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 00:28:45.051523   43671 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 00:28:45.051529   43671 command_runner.go:130] > # global_auth_file = ""
	I0914 00:28:45.051537   43671 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 00:28:45.051546   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.051554   43671 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0914 00:28:45.051564   43671 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 00:28:45.051573   43671 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 00:28:45.051581   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.051588   43671 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 00:28:45.051598   43671 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 00:28:45.051607   43671 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 00:28:45.051617   43671 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 00:28:45.051625   43671 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 00:28:45.051633   43671 command_runner.go:130] > # pause_command = "/pause"
	I0914 00:28:45.051642   43671 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0914 00:28:45.051651   43671 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0914 00:28:45.051660   43671 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0914 00:28:45.051670   43671 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0914 00:28:45.051681   43671 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0914 00:28:45.051694   43671 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0914 00:28:45.051704   43671 command_runner.go:130] > # pinned_images = [
	I0914 00:28:45.051710   43671 command_runner.go:130] > # ]
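A purely illustrative pinned_images value using the three pattern styles described above (the image names other than the configured pause image are examples, not taken from this run):

	pinned_images = [
		"registry.k8s.io/pause:3.10",   # exact: must match the entire name
		"registry.k8s.io/kube-*",       # glob: wildcard only at the end
		"*coredns*",                    # keyword: wildcards on both ends
	]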
	I0914 00:28:45.051721   43671 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 00:28:45.051734   43671 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 00:28:45.051758   43671 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 00:28:45.051771   43671 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 00:28:45.051798   43671 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 00:28:45.051808   43671 command_runner.go:130] > # signature_policy = ""
	I0914 00:28:45.051818   43671 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0914 00:28:45.051834   43671 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0914 00:28:45.051847   43671 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0914 00:28:45.051860   43671 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0914 00:28:45.051873   43671 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0914 00:28:45.051889   43671 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0914 00:28:45.051904   43671 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 00:28:45.051917   43671 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 00:28:45.051926   43671 command_runner.go:130] > # changing them here.
	I0914 00:28:45.051935   43671 command_runner.go:130] > # insecure_registries = [
	I0914 00:28:45.051942   43671 command_runner.go:130] > # ]
	I0914 00:28:45.051954   43671 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 00:28:45.051965   43671 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 00:28:45.051973   43671 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 00:28:45.051988   43671 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 00:28:45.051996   43671 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 00:28:45.052008   43671 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 00:28:45.052016   43671 command_runner.go:130] > # CNI plugins.
	I0914 00:28:45.052022   43671 command_runner.go:130] > [crio.network]
	I0914 00:28:45.052035   43671 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 00:28:45.052047   43671 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 00:28:45.052057   43671 command_runner.go:130] > # cni_default_network = ""
	I0914 00:28:45.052068   43671 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 00:28:45.052077   43671 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 00:28:45.052088   43671 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 00:28:45.052097   43671 command_runner.go:130] > # plugin_dirs = [
	I0914 00:28:45.052106   43671 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 00:28:45.052113   43671 command_runner.go:130] > # ]
	I0914 00:28:45.052123   43671 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 00:28:45.052141   43671 command_runner.go:130] > [crio.metrics]
	I0914 00:28:45.052151   43671 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 00:28:45.052158   43671 command_runner.go:130] > enable_metrics = true
	I0914 00:28:45.052169   43671 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 00:28:45.052178   43671 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 00:28:45.052191   43671 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0914 00:28:45.052204   43671 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 00:28:45.052216   43671 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 00:28:45.052225   43671 command_runner.go:130] > # metrics_collectors = [
	I0914 00:28:45.052233   43671 command_runner.go:130] > # 	"operations",
	I0914 00:28:45.052243   43671 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 00:28:45.052251   43671 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 00:28:45.052265   43671 command_runner.go:130] > # 	"operations_errors",
	I0914 00:28:45.052274   43671 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 00:28:45.052282   43671 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 00:28:45.052292   43671 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 00:28:45.052301   43671 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 00:28:45.052309   43671 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 00:28:45.052322   43671 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 00:28:45.052332   43671 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 00:28:45.052341   43671 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0914 00:28:45.052353   43671 command_runner.go:130] > # 	"containers_oom_total",
	I0914 00:28:45.052363   43671 command_runner.go:130] > # 	"containers_oom",
	I0914 00:28:45.052373   43671 command_runner.go:130] > # 	"processes_defunct",
	I0914 00:28:45.052380   43671 command_runner.go:130] > # 	"operations_total",
	I0914 00:28:45.052389   43671 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 00:28:45.052397   43671 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 00:28:45.052407   43671 command_runner.go:130] > # 	"operations_errors_total",
	I0914 00:28:45.052418   43671 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 00:28:45.052428   43671 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 00:28:45.052437   43671 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 00:28:45.052446   43671 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 00:28:45.052454   43671 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 00:28:45.052468   43671 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 00:28:45.052478   43671 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0914 00:28:45.052488   43671 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0914 00:28:45.052494   43671 command_runner.go:130] > # ]
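If only a subset of collectors were wanted, a hypothetical override (collector names taken from the list above, port left at its documented default) might read:

	enable_metrics = true
	# enable only a few of the collectors listed above
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]
	# default metrics port, as documented just below
	metrics_port = 9090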
	I0914 00:28:45.052505   43671 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 00:28:45.052514   43671 command_runner.go:130] > # metrics_port = 9090
	I0914 00:28:45.052523   43671 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 00:28:45.052533   43671 command_runner.go:130] > # metrics_socket = ""
	I0914 00:28:45.052542   43671 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 00:28:45.052555   43671 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 00:28:45.052569   43671 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 00:28:45.052579   43671 command_runner.go:130] > # certificate on any modification event.
	I0914 00:28:45.052587   43671 command_runner.go:130] > # metrics_cert = ""
	I0914 00:28:45.052598   43671 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 00:28:45.052609   43671 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 00:28:45.052619   43671 command_runner.go:130] > # metrics_key = ""
	I0914 00:28:45.052630   43671 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 00:28:45.052639   43671 command_runner.go:130] > [crio.tracing]
	I0914 00:28:45.052649   43671 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 00:28:45.052658   43671 command_runner.go:130] > # enable_tracing = false
	I0914 00:28:45.052668   43671 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 00:28:45.052677   43671 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 00:28:45.052688   43671 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0914 00:28:45.052699   43671 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
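To actually export traces, a hypothetical [crio.tracing] configuration (endpoint kept at the commented default, sampling rate set to the "always sample" value mentioned above) could be:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	# 1000000 = always sample, per the comment above
	tracing_sampling_rate_per_million = 1000000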
	I0914 00:28:45.052709   43671 command_runner.go:130] > # CRI-O NRI configuration.
	I0914 00:28:45.052715   43671 command_runner.go:130] > [crio.nri]
	I0914 00:28:45.052726   43671 command_runner.go:130] > # Globally enable or disable NRI.
	I0914 00:28:45.052735   43671 command_runner.go:130] > # enable_nri = false
	I0914 00:28:45.052743   43671 command_runner.go:130] > # NRI socket to listen on.
	I0914 00:28:45.052753   43671 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0914 00:28:45.052762   43671 command_runner.go:130] > # NRI plugin directory to use.
	I0914 00:28:45.052771   43671 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0914 00:28:45.052788   43671 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0914 00:28:45.052805   43671 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0914 00:28:45.052818   43671 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0914 00:28:45.052828   43671 command_runner.go:130] > # nri_disable_connections = false
	I0914 00:28:45.052840   43671 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0914 00:28:45.052848   43671 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0914 00:28:45.052860   43671 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0914 00:28:45.052871   43671 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0914 00:28:45.052884   43671 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 00:28:45.052892   43671 command_runner.go:130] > [crio.stats]
	I0914 00:28:45.052902   43671 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 00:28:45.052914   43671 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 00:28:45.052924   43671 command_runner.go:130] > # stats_collection_period = 0
	I0914 00:28:45.052971   43671 command_runner.go:130] ! time="2024-09-14 00:28:45.003150630Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0914 00:28:45.052990   43671 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 00:28:45.053084   43671 cni.go:84] Creating CNI manager for ""
	I0914 00:28:45.053097   43671 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 00:28:45.053110   43671 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:28:45.053136   43671 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-209237 NodeName:multinode-209237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:28:45.053313   43671 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-209237"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:28:45.053393   43671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:28:45.064326   43671 command_runner.go:130] > kubeadm
	I0914 00:28:45.064354   43671 command_runner.go:130] > kubectl
	I0914 00:28:45.064360   43671 command_runner.go:130] > kubelet
	I0914 00:28:45.064426   43671 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:28:45.064509   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:28:45.074839   43671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 00:28:45.092228   43671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:28:45.109613   43671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0914 00:28:45.125964   43671 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0914 00:28:45.129806   43671 command_runner.go:130] > 192.168.39.214	control-plane.minikube.internal
	I0914 00:28:45.129875   43671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:28:45.276541   43671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:28:45.291617   43671 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237 for IP: 192.168.39.214
	I0914 00:28:45.291644   43671 certs.go:194] generating shared ca certs ...
	I0914 00:28:45.291665   43671 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:28:45.291838   43671 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:28:45.291901   43671 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:28:45.291915   43671 certs.go:256] generating profile certs ...
	I0914 00:28:45.292013   43671 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/client.key
	I0914 00:28:45.292084   43671 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.key.25f22b36
	I0914 00:28:45.292145   43671 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.key
	I0914 00:28:45.292160   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 00:28:45.292190   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 00:28:45.292208   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 00:28:45.292226   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 00:28:45.292244   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 00:28:45.292263   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 00:28:45.292282   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 00:28:45.292307   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 00:28:45.292370   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:28:45.292411   43671 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:28:45.292424   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:28:45.292468   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:28:45.292524   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:28:45.292558   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:28:45.292615   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:28:45.292658   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.292677   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.292696   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.294635   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:28:45.318825   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:28:45.342423   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:28:45.365761   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:28:45.388625   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 00:28:45.411430   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:28:45.434482   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:28:45.457275   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 00:28:45.480224   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:28:45.502320   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:28:45.527973   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:28:45.551905   43671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:28:45.567672   43671 ssh_runner.go:195] Run: openssl version
	I0914 00:28:45.573101   43671 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0914 00:28:45.573230   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:28:45.583623   43671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.587665   43671 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.587704   43671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.587748   43671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.592866   43671 command_runner.go:130] > b5213941
	I0914 00:28:45.593012   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:28:45.601877   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:28:45.612222   43671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.616741   43671 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.616768   43671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.616805   43671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.622294   43671 command_runner.go:130] > 51391683
	I0914 00:28:45.622347   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:28:45.632068   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:28:45.642541   43671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.647291   43671 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.647324   43671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.647377   43671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.653359   43671 command_runner.go:130] > 3ec20f2e
	I0914 00:28:45.653442   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:28:45.662934   43671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:28:45.667253   43671 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:28:45.667292   43671 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0914 00:28:45.667300   43671 command_runner.go:130] > Device: 253,1	Inode: 4195880     Links: 1
	I0914 00:28:45.667309   43671 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 00:28:45.667318   43671 command_runner.go:130] > Access: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667326   43671 command_runner.go:130] > Modify: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667333   43671 command_runner.go:130] > Change: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667341   43671 command_runner.go:130] >  Birth: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667420   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:28:45.672888   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.673064   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:28:45.678501   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.678586   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:28:45.683817   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.683931   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:28:45.689209   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.689406   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:28:45.694698   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.694767   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 00:28:45.700073   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.700154   43671 kubeadm.go:392] StartCluster: {Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:28:45.700256   43671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:28:45.700320   43671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:28:45.735141   43671 command_runner.go:130] > 317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7
	I0914 00:28:45.735170   43671 command_runner.go:130] > 7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c
	I0914 00:28:45.735179   43671 command_runner.go:130] > 8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca
	I0914 00:28:45.735190   43671 command_runner.go:130] > f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6
	I0914 00:28:45.735197   43671 command_runner.go:130] > 374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0
	I0914 00:28:45.735206   43671 command_runner.go:130] > 03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a
	I0914 00:28:45.735216   43671 command_runner.go:130] > cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0
	I0914 00:28:45.735227   43671 command_runner.go:130] > 84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2
	I0914 00:28:45.738589   43671 cri.go:89] found id: "317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7"
	I0914 00:28:45.738610   43671 cri.go:89] found id: "7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c"
	I0914 00:28:45.738614   43671 cri.go:89] found id: "8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca"
	I0914 00:28:45.738617   43671 cri.go:89] found id: "f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6"
	I0914 00:28:45.738619   43671 cri.go:89] found id: "374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0"
	I0914 00:28:45.738622   43671 cri.go:89] found id: "03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a"
	I0914 00:28:45.738625   43671 cri.go:89] found id: "cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0"
	I0914 00:28:45.738627   43671 cri.go:89] found id: "84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2"
	I0914 00:28:45.738629   43671 cri.go:89] found id: ""
	I0914 00:28:45.738671   43671 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.022700997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee1dcc9d-7b12-4925-821c-f88973867195 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.023864595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df204bc2-4c00-4e34-ad6e-2ecbda3da94b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.024280626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273832024258007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df204bc2-4c00-4e34-ad6e-2ecbda3da94b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.024745638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5e9c6f5-83a9-441a-b794-dd6539f4bcef name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.024851329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5e9c6f5-83a9-441a-b794-dd6539f4bcef name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.025209792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5e9c6f5-83a9-441a-b794-dd6539f4bcef name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.065535699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1f4eebe-f453-40da-aa50-bd030d919abe name=/runtime.v1.RuntimeService/Version
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.065626605Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1f4eebe-f453-40da-aa50-bd030d919abe name=/runtime.v1.RuntimeService/Version
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.066833449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fad815e-64de-4618-8829-10959df54f88 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.067336380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273832067313134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fad815e-64de-4618-8829-10959df54f88 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.068346277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=180c0e66-2f71-4081-affd-901f7f4abc80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.068419175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=180c0e66-2f71-4081-affd-901f7f4abc80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.068850069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=180c0e66-2f71-4081-affd-901f7f4abc80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.074063070Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=61412788-ee2b-4791-a4bb-faeb9da8b314 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.074426443Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-956wv,Uid:d188d3b8-bd67-4381-be40-70ea7e88d809,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273766843374728,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704711630Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-svdnx,Uid:ff82006d-cb22-4180-9740-454f158c2f25,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726273733136087225,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704707656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&PodSandboxMetadata{Name:kindnet-q25jz,Uid:0b1e5199-8d9b-449c-868c-4c2ae8215936,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733064727958,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704710467Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:53dc5e6a-ac47-4181-9a30-96faeff841b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733062959759,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T00:28:52.704706263Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&PodSandboxMetadata{Name:kube-proxy-b9vxj,Uid:5485377f-3371-44f1-9d25-d4fc9c87e7e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733061034315,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704704769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-209237,Uid:82a0f57f68d2ea01e945d218ac798055,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728263512563,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82a0f57f68d2ea01e945d218ac798055,kubernetes.io/config.seen: 2024-09-14T00:28:47.722098453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&PodSandboxMetadat
a{Name:etcd-multinode-209237,Uid:c92b5a4000ae755fa3f55ca0633d7626,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728250050732,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.214:2379,kubernetes.io/config.hash: c92b5a4000ae755fa3f55ca0633d7626,kubernetes.io/config.seen: 2024-09-14T00:28:47.722103988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-209237,Uid:c35437f7ada12fed26bb13b8e7897ac7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728246048151,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.214:8443,kubernetes.io/config.hash: c35437f7ada12fed26bb13b8e7897ac7,kubernetes.io/config.seen: 2024-09-14T00:28:47.722105664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-209237,Uid:1112e0e9df8e98ef0757c4dbc4c653f9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728245399820,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 1112e0e9df8e98ef0757c4dbc4c653f9,kubernetes.io/config.seen: 2024-09-14T00:28:47.722101945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-956wv,Uid:d188d3b8-bd67-4381-be40-70ea7e88d809,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273407156844213,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:23:26.838708888Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:53dc5e6a-ac47-4181-9a30-96faeff841b7,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1726273352283652717,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T00:22:31.970081393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-svdnx,Uid:ff82006d-cb22-4180-9740-454f158c2f25,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273352279234941,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:22:31.963128483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&PodSandboxMetadata{Name:kindnet-q25jz,Uid:0b1e5199-8d9b-449c-868c-4c2ae8215936,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273340043879986,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:22:18.832390621Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&PodSandboxMetadata{Name:kube-proxy-b9vxj,Uid:5485377f-3371-44f1-9d25-d4fc9c87e7e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273340042926029,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:22:18.837436333Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&PodSandboxMetadata{Name:etcd-multinode-209237,Uid:c92b5a4000ae755fa3f55ca0633d7626,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329123996468,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.214:2379,kubernetes.io/config.hash: c92b5a4000ae755fa3f55ca0633d7626,kubernetes.io/config.seen: 2024-09-14T00:22:08.644841068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e053
9c553a6d05e5c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-209237,Uid:1112e0e9df8e98ef0757c4dbc4c653f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329122911118,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1112e0e9df8e98ef0757c4dbc4c653f9,kubernetes.io/config.seen: 2024-09-14T00:22:08.644848274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-209237,Uid:82a0f57f68d2ea01e945d218ac798055,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329122185059,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82a0f57f68d2ea01e945d218ac798055,kubernetes.io/config.seen: 2024-09-14T00:22:08.644847187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-209237,Uid:c35437f7ada12fed26bb13b8e7897ac7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329098201767,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint
: 192.168.39.214:8443,kubernetes.io/config.hash: c35437f7ada12fed26bb13b8e7897ac7,kubernetes.io/config.seen: 2024-09-14T00:22:08.644845619Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=61412788-ee2b-4791-a4bb-faeb9da8b314 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.075079870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c834374f-b190-4ffe-9ded-37134bbb413d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.075154224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c834374f-b190-4ffe-9ded-37134bbb413d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.075616399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c834374f-b190-4ffe-9ded-37134bbb413d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.108730648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=648b45c5-78c1-4a58-836e-f6f30d8b06df name=/runtime.v1.RuntimeService/Version
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.108851650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=648b45c5-78c1-4a58-836e-f6f30d8b06df name=/runtime.v1.RuntimeService/Version
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.109682842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1cd4083-4537-4257-8c83-403b8477ae3f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.110138434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273832110114664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1cd4083-4537-4257-8c83-403b8477ae3f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.110575229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ff40b41-5ce9-4a97-b3ca-53eee695b34b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.110627062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ff40b41-5ce9-4a97-b3ca-53eee695b34b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:30:32 multinode-209237 crio[2695]: time="2024-09-14 00:30:32.111010313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ff40b41-5ce9-4a97-b3ca-53eee695b34b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eab058a21cc8f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   7a9ca3cb79a28       busybox-7dff88458-956wv
	6031bffda9a2f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   d4da591ab4668       kindnet-q25jz
	cb952776322fc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   137000f4fa16b       coredns-7c65d6cfc9-svdnx
	c693ef0e7b777       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   189fda91fd94b       kube-proxy-b9vxj
	58ad99ff59e6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   19b7ab54ded9a       storage-provisioner
	0ba391e8a5aad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   183e8e40e69dc       kube-controller-manager-multinode-209237
	b72ce42c87cea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   f0975fe907998       etcd-multinode-209237
	81cdf784a468e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   78472bed701f1       kube-apiserver-multinode-209237
	b91527355f6ed       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   7550692aa5380       kube-scheduler-multinode-209237
	67b761e51b938       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   ff0de3b34b96d       busybox-7dff88458-956wv
	317b9e570ba23       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago        Exited              coredns                   0                   74cdd0432b856       coredns-7c65d6cfc9-svdnx
	7b97935c57b90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   6c69c0c0a87d5       storage-provisioner
	8e2b4c92c6869       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   16f47f89e20eb       kindnet-q25jz
	f8fe88c904818       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   5902218c21491       kube-proxy-b9vxj
	374870699ff0a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   ea20f7e024487       kube-controller-manager-multinode-209237
	03bcf16a526d9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   1f7cefd0b83fb       etcd-multinode-209237
	cc34260f15554       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   720c62f82c6bc       kube-scheduler-multinode-209237
	84997aaf1d8b5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   5bce7f8fcba87       kube-apiserver-multinode-209237
	
	
	==> coredns [317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7] <==
	[INFO] 10.244.1.2:54390 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640334s
	[INFO] 10.244.1.2:38994 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099861s
	[INFO] 10.244.1.2:58586 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093077s
	[INFO] 10.244.1.2:49292 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236217s
	[INFO] 10.244.1.2:42846 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104296s
	[INFO] 10.244.1.2:54669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139418s
	[INFO] 10.244.1.2:57229 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009637s
	[INFO] 10.244.0.3:53187 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097746s
	[INFO] 10.244.0.3:43993 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059835s
	[INFO] 10.244.0.3:47338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049723s
	[INFO] 10.244.0.3:55121 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043808s
	[INFO] 10.244.1.2:44308 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129689s
	[INFO] 10.244.1.2:51773 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115579s
	[INFO] 10.244.1.2:59177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081764s
	[INFO] 10.244.1.2:58712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126592s
	[INFO] 10.244.0.3:45372 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181947s
	[INFO] 10.244.0.3:33077 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175651s
	[INFO] 10.244.0.3:55956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153705s
	[INFO] 10.244.0.3:50590 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101787s
	[INFO] 10.244.1.2:45483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162049s
	[INFO] 10.244.1.2:40517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143862s
	[INFO] 10.244.1.2:43378 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084981s
	[INFO] 10.244.1.2:37454 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077679s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32948 - 37710 "HINFO IN 5518691668570056764.2722245426264500041. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014399104s
	
	
	==> describe nodes <==
	Name:               multinode-209237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=multinode-209237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_22_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209237
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:30:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    multinode-209237
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64f2ce3c14ee4a9f95871f538c56db8d
	  System UUID:                64f2ce3c-14ee-4a9f-9587-1f538c56db8d
	  Boot ID:                    16cc41bb-1ddb-422a-b746-d57940c85259
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-956wv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-7c65d6cfc9-svdnx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m13s
	  kube-system                 etcd-multinode-209237                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m18s
	  kube-system                 kindnet-q25jz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m14s
	  kube-system                 kube-apiserver-multinode-209237             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-controller-manager-multinode-209237    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-b9vxj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-multinode-209237             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m11s                  kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 8m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m24s (x8 over 8m24s)  kubelet          Node multinode-209237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s (x8 over 8m24s)  kubelet          Node multinode-209237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m24s (x7 over 8m24s)  kubelet          Node multinode-209237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m18s (x2 over 8m18s)  kubelet          Node multinode-209237 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m18s (x2 over 8m18s)  kubelet          Node multinode-209237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s (x2 over 8m18s)  kubelet          Node multinode-209237 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m14s                  node-controller  Node multinode-209237 event: Registered Node multinode-209237 in Controller
	  Normal  NodeReady                8m1s                   kubelet          Node multinode-209237 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)    kubelet          Node multinode-209237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)    kubelet          Node multinode-209237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)    kubelet          Node multinode-209237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-209237 event: Registered Node multinode-209237 in Controller
	
	
	Name:               multinode-209237-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209237-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=multinode-209237
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T00_29_34_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:29:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209237-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:30:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:29:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:29:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:29:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:29:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-209237-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbe3ca9578d2450f8368ddf16293a2eb
	  System UUID:                cbe3ca95-78d2-450f-8368-ddf16293a2eb
	  Boot ID:                    7235c0df-e2b7-4425-ad33-af70beb280f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzw2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-xmgm2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-proxy-pddlw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m22s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet     Node multinode-209237-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet     Node multinode-209237-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet     Node multinode-209237-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-209237-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-209237-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-209237-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-209237-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-209237-m02 status is now: NodeReady
	
	
	Name:               multinode-209237-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209237-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=multinode-209237
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T00_30_11_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:30:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209237-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:30:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:30:29 +0000   Sat, 14 Sep 2024 00:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:30:29 +0000   Sat, 14 Sep 2024 00:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:30:29 +0000   Sat, 14 Sep 2024 00:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:30:29 +0000   Sat, 14 Sep 2024 00:30:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-209237-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a0b55e0beb9470094c786d95ead3a4c
	  System UUID:                2a0b55e0-beb9-4700-94c7-86d95ead3a4c
	  Boot ID:                    e15d1805-3833-4679-a36c-cbb02632749c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6kdl5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-96zdq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet          Node multinode-209237-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet          Node multinode-209237-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet          Node multinode-209237-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-209237-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-209237-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-209237-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-209237-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m25s                  kubelet          Node multinode-209237-m03 status is now: NodeReady
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x2 over 22s)      kubelet          Node multinode-209237-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 22s)      kubelet          Node multinode-209237-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 22s)      kubelet          Node multinode-209237-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17s                    node-controller  Node multinode-209237-m03 event: Registered Node multinode-209237-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-209237-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061865] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.178940] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.117094] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.275249] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Sep14 00:22] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.079969] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.060877] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002313] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.088513] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.100109] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.136910] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.969449] kauditd_printk_skb: 60 callbacks suppressed
	[Sep14 00:23] kauditd_printk_skb: 12 callbacks suppressed
	[Sep14 00:28] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.158472] systemd-fstab-generator[2632]: Ignoring "noauto" option for root device
	[  +0.170611] systemd-fstab-generator[2646]: Ignoring "noauto" option for root device
	[  +0.141869] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.280593] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.685211] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +2.341461] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +5.641097] kauditd_printk_skb: 184 callbacks suppressed
	[Sep14 00:29] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.084386] systemd-fstab-generator[3744]: Ignoring "noauto" option for root device
	[ +17.919526] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a] <==
	{"level":"info","ts":"2024-09-14T00:23:08.556874Z","caller":"traceutil/trace.go:171","msg":"trace[905844528] range","detail":"{range_begin:/registry/minions/multinode-209237-m02; range_end:; response_count:1; response_revision:513; }","duration":"284.418666ms","start":"2024-09-14T00:23:08.272446Z","end":"2024-09-14T00:23:08.556865Z","steps":["trace[905844528] 'agreement among raft nodes before linearized reading'  (duration: 284.27404ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:08.557004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.067912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:23:08.557067Z","caller":"traceutil/trace.go:171","msg":"trace[770311732] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:513; }","duration":"305.127353ms","start":"2024-09-14T00:23:08.251928Z","end":"2024-09-14T00:23:08.557056Z","steps":["trace[770311732] 'agreement among raft nodes before linearized reading'  (duration: 305.053198ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:08.557108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T00:23:08.251893Z","time spent":"305.204326ms","remote":"127.0.0.1:50052","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-14T00:23:08.557514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T00:23:08.220049Z","time spent":"336.75953ms","remote":"127.0.0.1:50280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2878,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-209237-m02\" mod_revision:504 > success:<request_put:<key:\"/registry/minions/multinode-209237-m02\" value_size:2832 >> failure:<request_range:<key:\"/registry/minions/multinode-209237-m02\" > >"}
	{"level":"info","ts":"2024-09-14T00:23:08.878236Z","caller":"traceutil/trace.go:171","msg":"trace[2033956248] linearizableReadLoop","detail":"{readStateIndex:535; appliedIndex:534; }","duration":"153.029492ms","start":"2024-09-14T00:23:08.725184Z","end":"2024-09-14T00:23:08.878214Z","steps":["trace[2033956248] 'read index received'  (duration: 86.753327ms)","trace[2033956248] 'applied index is now lower than readState.Index'  (duration: 66.275121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T00:23:08.878406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.195811ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:23:08.878459Z","caller":"traceutil/trace.go:171","msg":"trace[1875454931] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:513; }","duration":"153.26437ms","start":"2024-09-14T00:23:08.725181Z","end":"2024-09-14T00:23:08.878445Z","steps":["trace[1875454931] 'agreement among raft nodes before linearized reading'  (duration: 153.167522ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:08.878559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.045163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-209237-m02\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-14T00:23:08.878614Z","caller":"traceutil/trace.go:171","msg":"trace[1656493857] range","detail":"{range_begin:/registry/minions/multinode-209237-m02; range_end:; response_count:1; response_revision:513; }","duration":"106.105177ms","start":"2024-09-14T00:23:08.772499Z","end":"2024-09-14T00:23:08.878604Z","steps":["trace[1656493857] 'agreement among raft nodes before linearized reading'  (duration: 106.008888ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:57.639574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.852738ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6697857521825737153 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-209237-m03.17f4f48b8c226a41\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-209237-m03.17f4f48b8c226a41\" value_size:642 lease:6697857521825736767 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-14T00:23:57.639679Z","caller":"traceutil/trace.go:171","msg":"trace[1071833621] linearizableReadLoop","detail":"{readStateIndex:648; appliedIndex:647; }","duration":"110.242348ms","start":"2024-09-14T00:23:57.529426Z","end":"2024-09-14T00:23:57.639669Z","steps":["trace[1071833621] 'read index received'  (duration: 25.954µs)","trace[1071833621] 'applied index is now lower than readState.Index'  (duration: 110.21557ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T00:23:57.639738Z","caller":"traceutil/trace.go:171","msg":"trace[867888670] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"232.448573ms","start":"2024-09-14T00:23:57.407284Z","end":"2024-09-14T00:23:57.639733Z","steps":["trace[867888670] 'process raft request'  (duration: 74.105213ms)","trace[867888670] 'compare'  (duration: 157.748687ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T00:23:57.640067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.643142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-209237-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:23:57.640151Z","caller":"traceutil/trace.go:171","msg":"trace[1037065280] range","detail":"{range_begin:/registry/minions/multinode-209237-m03; range_end:; response_count:0; response_revision:616; }","duration":"110.733954ms","start":"2024-09-14T00:23:57.529408Z","end":"2024-09-14T00:23:57.640142Z","steps":["trace[1037065280] 'agreement among raft nodes before linearized reading'  (duration: 110.605078ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:27:12.424373Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T00:27:12.424494Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-209237","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"]}
	{"level":"warn","ts":"2024-09-14T00:27:12.424624Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:27:12.424712Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:27:12.509046Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.214:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:27:12.509139Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.214:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T00:27:12.509231Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9910392473c15cf3","current-leader-member-id":"9910392473c15cf3"}
	{"level":"info","ts":"2024-09-14T00:27:12.511476Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:27:12.511615Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:27:12.511646Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-209237","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"]}
	
	
	==> etcd [b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd] <==
	{"level":"info","ts":"2024-09-14T00:28:48.889540Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"437e955a662fe33","local-member-id":"9910392473c15cf3","added-peer-id":"9910392473c15cf3","added-peer-peer-urls":["https://192.168.39.214:2380"]}
	{"level":"info","ts":"2024-09-14T00:28:48.889668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"437e955a662fe33","local-member-id":"9910392473c15cf3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:28:48.889721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:28:48.897864Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:28:48.899482Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T00:28:48.899695Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9910392473c15cf3","initial-advertise-peer-urls":["https://192.168.39.214:2380"],"listen-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:28:48.899731Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:28:48.909273Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:28:48.913809Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:28:50.715391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T00:28:50.715472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:28:50.715522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgPreVoteResp from 9910392473c15cf3 at term 2"}
	{"level":"info","ts":"2024-09-14T00:28:50.715542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.715554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgVoteResp from 9910392473c15cf3 at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.715565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.715587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9910392473c15cf3 elected leader 9910392473c15cf3 at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.721190Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:28:50.721371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:28:50.721212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9910392473c15cf3","local-member-attributes":"{Name:multinode-209237 ClientURLs:[https://192.168.39.214:2379]}","request-path":"/0/members/9910392473c15cf3/attributes","cluster-id":"437e955a662fe33","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:28:50.722244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:28:50.722873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:28:50.722979Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:28:50.723436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:28:50.723831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:28:50.724251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.214:2379"}
	
	
	==> kernel <==
	 00:30:32 up 8 min,  0 users,  load average: 0.39, 0.25, 0.13
	Linux multinode-209237 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d] <==
	I0914 00:29:44.418496       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:29:54.419035       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:29:54.419075       1 main.go:299] handling current node
	I0914 00:29:54.419089       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:29:54.419094       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:29:54.419228       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:29:54.419251       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:30:04.420726       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:30:04.420803       1 main.go:299] handling current node
	I0914 00:30:04.420817       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:30:04.420823       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:30:04.420977       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:30:04.420994       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:30:14.420236       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:30:14.420331       1 main.go:299] handling current node
	I0914 00:30:14.420353       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:30:14.420359       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:30:14.420481       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:30:14.420499       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.2.0/24] 
	I0914 00:30:24.420088       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:30:24.420168       1 main.go:299] handling current node
	I0914 00:30:24.420182       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:30:24.420188       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:30:24.420363       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:30:24.420386       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca] <==
	I0914 00:26:31.417395       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:26:41.424683       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:26:41.424813       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:26:41.425005       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:26:41.425027       1 main.go:299] handling current node
	I0914 00:26:41.425047       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:26:41.425052       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:26:51.416060       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:26:51.416192       1 main.go:299] handling current node
	I0914 00:26:51.416245       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:26:51.416256       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:26:51.416430       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:26:51.416451       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:27:01.423184       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:27:01.423289       1 main.go:299] handling current node
	I0914 00:27:01.423321       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:27:01.423340       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:27:01.423543       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:27:01.423846       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:27:11.424861       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:27:11.424903       1 main.go:299] handling current node
	I0914 00:27:11.424918       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:27:11.424958       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:27:11.425078       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:27:11.425100       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9] <==
	I0914 00:28:52.005077       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 00:28:52.005367       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 00:28:52.006573       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 00:28:52.006625       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 00:28:52.011153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 00:28:52.011985       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:28:52.014815       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 00:28:52.021942       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 00:28:52.022049       1 aggregator.go:171] initial CRD sync complete...
	I0914 00:28:52.022129       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 00:28:52.022153       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 00:28:52.022175       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:28:52.023035       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 00:28:52.044381       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 00:28:52.067233       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:28:52.067343       1 policy_source.go:224] refreshing policies
	I0914 00:28:52.099907       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:28:52.922100       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 00:28:54.333721       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:28:54.459663       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:28:54.473697       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:28:54.535971       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:28:54.543169       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 00:28:55.628531       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 00:28:55.680673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2] <==
	W0914 00:27:12.442885       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.442937       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.442970       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443023       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443063       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443094       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443126       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443172       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0914 00:27:12.444743       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc009795e58)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	W0914 00:27:12.450485       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450528       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450558       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450600       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450638       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450664       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450690       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450718       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450744       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.452615       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.452939       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453031       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453119       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453535       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453570       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0914 00:27:12.454344       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c] <==
	I0914 00:29:52.063220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:29:52.084126       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.227µs"
	I0914 00:29:52.111591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.861µs"
	I0914 00:29:55.342779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:29:56.152080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.937847ms"
	I0914 00:29:56.152218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.862µs"
	I0914 00:30:04.616042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:30:09.722147       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:09.739612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:09.982268       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:30:09.982369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.076441       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-209237-m03\" does not exist"
	I0914 00:30:11.078714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:30:11.092124       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-209237-m03" podCIDRs=["10.244.2.0/24"]
	I0914 00:30:11.092163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.092322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.101312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.486146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.804905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:15.425421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:21.458844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:29.313074       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:30:29.313300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:29.325999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:30.361655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	
	
	==> kube-controller-manager [374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0] <==
	I0914 00:24:47.573210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:24:47.593971       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-209237-m03" podCIDRs=["10.244.3.0/24"]
	I0914 00:24:47.594011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	E0914 00:24:47.604080       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-209237-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-209237-m03" podCIDRs=["10.244.4.0/24"]
	E0914 00:24:47.604187       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-209237-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-209237-m03"
	E0914 00:24:47.604266       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-209237-m03': failed to patch node CIDR: Node \"multinode-209237-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0914 00:24:47.604302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:47.609386       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:47.828648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:48.148822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:48.322997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:57.966997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:07.138915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:07.139540       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:25:07.147200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:08.256628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:48.273094       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:48.275108       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:25:48.277432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:25:48.304736       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:48.305290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:25:48.354466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.846122ms"
	I0914 00:25:48.354640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.506µs"
	I0914 00:25:53.437182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:26:03.517707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	
	
	==> kube-proxy [c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:28:53.649249       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:28:53.660234       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	E0914 00:28:53.660512       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:28:53.691905       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:28:53.691942       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:28:53.691972       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:28:53.694191       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:28:53.694515       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:28:53.694556       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:28:53.696216       1 config.go:199] "Starting service config controller"
	I0914 00:28:53.696292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:28:53.696335       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:28:53.696391       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:28:53.697056       1 config.go:328] "Starting node config controller"
	I0914 00:28:53.697658       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:28:53.797202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:28:53.797256       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:28:53.799075       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:22:20.332273       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:22:20.354364       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	E0914 00:22:20.354502       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:22:20.412851       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:22:20.412889       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:22:20.412917       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:22:20.416683       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:22:20.420905       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:22:20.421013       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:22:20.423294       1 config.go:199] "Starting service config controller"
	I0914 00:22:20.423364       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:22:20.423408       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:22:20.423424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:22:20.424204       1 config.go:328] "Starting node config controller"
	I0914 00:22:20.425678       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:22:20.524387       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:22:20.524410       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:22:20.525851       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d] <==
	I0914 00:28:49.464036       1 serving.go:386] Generated self-signed cert in-memory
	W0914 00:28:51.960391       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 00:28:51.960551       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 00:28:51.960588       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 00:28:51.960619       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 00:28:52.026428       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 00:28:52.026734       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:28:52.029117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 00:28:52.029201       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 00:28:52.029989       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 00:28:52.030106       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 00:28:52.129688       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0] <==
	E0914 00:22:12.402902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.455395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:22:12.455440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.484728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:22:12.484875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.531688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:12.531987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.595571       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:22:12.595620       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 00:22:12.599173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:22:12.599218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.626030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:12.626082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.638384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:12.638434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.641024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:22:12.641070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.750649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:22:12.750702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.764088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 00:22:12.764137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.879497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 00:22:12.879565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0914 00:22:15.085465       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 00:27:12.438131       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 14 00:28:57 multinode-209237 kubelet[2907]: E0914 00:28:57.787920    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273737787123721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:02 multinode-209237 kubelet[2907]: I0914 00:29:02.852577    2907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 14 00:29:07 multinode-209237 kubelet[2907]: E0914 00:29:07.791710    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273747791379707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:07 multinode-209237 kubelet[2907]: E0914 00:29:07.792032    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273747791379707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:17 multinode-209237 kubelet[2907]: E0914 00:29:17.795989    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273757793712197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:17 multinode-209237 kubelet[2907]: E0914 00:29:17.796040    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273757793712197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:27 multinode-209237 kubelet[2907]: E0914 00:29:27.803219    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273767802064161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:27 multinode-209237 kubelet[2907]: E0914 00:29:27.803259    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273767802064161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:37 multinode-209237 kubelet[2907]: E0914 00:29:37.804628    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273777804325221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:37 multinode-209237 kubelet[2907]: E0914 00:29:37.805035    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273777804325221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:47 multinode-209237 kubelet[2907]: E0914 00:29:47.804599    2907 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:29:47 multinode-209237 kubelet[2907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:29:47 multinode-209237 kubelet[2907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:29:47 multinode-209237 kubelet[2907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:29:47 multinode-209237 kubelet[2907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:29:47 multinode-209237 kubelet[2907]: E0914 00:29:47.806744    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273787806545282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:47 multinode-209237 kubelet[2907]: E0914 00:29:47.806819    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273787806545282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:57 multinode-209237 kubelet[2907]: E0914 00:29:57.808725    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273797808447318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:29:57 multinode-209237 kubelet[2907]: E0914 00:29:57.809078    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273797808447318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:30:07 multinode-209237 kubelet[2907]: E0914 00:30:07.811522    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273807810512063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:30:07 multinode-209237 kubelet[2907]: E0914 00:30:07.811585    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273807810512063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:30:17 multinode-209237 kubelet[2907]: E0914 00:30:17.815272    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273817814188699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:30:17 multinode-209237 kubelet[2907]: E0914 00:30:17.815674    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273817814188699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:30:27 multinode-209237 kubelet[2907]: E0914 00:30:27.818426    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273827816649116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:30:27 multinode-209237 kubelet[2907]: E0914 00:30:27.819169    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273827816649116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:30:31.718759   44772 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19640-5422/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-209237 -n multinode-209237
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-209237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.73s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 stop
E0914 00:32:20.626409   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209237 stop: exit status 82 (2m0.46324763s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-209237-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-209237 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209237 status: exit status 3 (18.781594721s)

                                                
                                                
-- stdout --
	multinode-209237
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209237-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:32:54.988197   45434 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.88:22: connect: no route to host
	E0914 00:32:54.988230   45434 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.88:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-209237 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-209237 -n multinode-209237
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-209237 logs -n 25: (1.468763434s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237:/home/docker/cp-test_multinode-209237-m02_multinode-209237.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237 sudo cat                                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m02_multinode-209237.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03:/home/docker/cp-test_multinode-209237-m02_multinode-209237-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237-m03 sudo cat                                   | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m02_multinode-209237-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp testdata/cp-test.txt                                                | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3527454802/001/cp-test_multinode-209237-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237:/home/docker/cp-test_multinode-209237-m03_multinode-209237.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237 sudo cat                                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m03_multinode-209237.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02:/home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237-m02 sudo cat                                   | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-209237 node stop m03                                                          | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	| node    | multinode-209237 node start                                                             | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:25 UTC |                     |
	| stop    | -p multinode-209237                                                                     | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:25 UTC |                     |
	| start   | -p multinode-209237                                                                     | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:27 UTC | 14 Sep 24 00:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC |                     |
	| node    | multinode-209237 node delete                                                            | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC | 14 Sep 24 00:30 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-209237 stop                                                                   | multinode-209237 | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:27:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:27:11.604178   43671 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:27:11.604295   43671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:27:11.604310   43671 out.go:358] Setting ErrFile to fd 2...
	I0914 00:27:11.604317   43671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:27:11.604511   43671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:27:11.605031   43671 out.go:352] Setting JSON to false
	I0914 00:27:11.605930   43671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4178,"bootTime":1726269454,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:27:11.606018   43671 start.go:139] virtualization: kvm guest
	I0914 00:27:11.608022   43671 out.go:177] * [multinode-209237] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:27:11.609378   43671 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:27:11.609454   43671 notify.go:220] Checking for updates...
	I0914 00:27:11.611401   43671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:27:11.612455   43671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:27:11.613489   43671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:27:11.614402   43671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:27:11.615386   43671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:27:11.616702   43671 config.go:182] Loaded profile config "multinode-209237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:27:11.616834   43671 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:27:11.617274   43671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:27:11.617338   43671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:27:11.632610   43671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0914 00:27:11.633091   43671 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:27:11.633678   43671 main.go:141] libmachine: Using API Version  1
	I0914 00:27:11.633703   43671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:27:11.634064   43671 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:27:11.634252   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:27:11.672431   43671 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:27:11.673559   43671 start.go:297] selected driver: kvm2
	I0914 00:27:11.673576   43671 start.go:901] validating driver "kvm2" against &{Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:27:11.673705   43671 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:27:11.674003   43671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:27:11.674071   43671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:27:11.689167   43671 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:27:11.689871   43671 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:27:11.689904   43671 cni.go:84] Creating CNI manager for ""
	I0914 00:27:11.689960   43671 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 00:27:11.690049   43671 start.go:340] cluster config:
	{Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:27:11.690180   43671 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:27:11.692040   43671 out.go:177] * Starting "multinode-209237" primary control-plane node in "multinode-209237" cluster
	I0914 00:27:11.693130   43671 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:27:11.693171   43671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:27:11.693184   43671 cache.go:56] Caching tarball of preloaded images
	I0914 00:27:11.693296   43671 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:27:11.693309   43671 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:27:11.693438   43671 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/config.json ...
	I0914 00:27:11.693659   43671 start.go:360] acquireMachinesLock for multinode-209237: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:27:11.693704   43671 start.go:364] duration metric: took 26.378µs to acquireMachinesLock for "multinode-209237"
	I0914 00:27:11.693731   43671 start.go:96] Skipping create...Using existing machine configuration
	I0914 00:27:11.693740   43671 fix.go:54] fixHost starting: 
	I0914 00:27:11.693995   43671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:27:11.694026   43671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:27:11.709670   43671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0914 00:27:11.710205   43671 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:27:11.710692   43671 main.go:141] libmachine: Using API Version  1
	I0914 00:27:11.710716   43671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:27:11.711060   43671 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:27:11.711288   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:27:11.711429   43671 main.go:141] libmachine: (multinode-209237) Calling .GetState
	I0914 00:27:11.713009   43671 fix.go:112] recreateIfNeeded on multinode-209237: state=Running err=<nil>
	W0914 00:27:11.713037   43671 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 00:27:11.714723   43671 out.go:177] * Updating the running kvm2 "multinode-209237" VM ...
	I0914 00:27:11.715830   43671 machine.go:93] provisionDockerMachine start ...
	I0914 00:27:11.715849   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:27:11.716006   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:11.718829   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.719259   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:11.719283   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.719394   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:11.719541   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.719694   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.719818   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:11.719941   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:11.720152   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:11.720169   43671 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:27:11.824597   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-209237
	
	I0914 00:27:11.824621   43671 main.go:141] libmachine: (multinode-209237) Calling .GetMachineName
	I0914 00:27:11.824854   43671 buildroot.go:166] provisioning hostname "multinode-209237"
	I0914 00:27:11.824887   43671 main.go:141] libmachine: (multinode-209237) Calling .GetMachineName
	I0914 00:27:11.825057   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:11.827842   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.828251   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:11.828273   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.828435   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:11.828623   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.828902   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.829028   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:11.829153   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:11.829328   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:11.829340   43671 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-209237 && echo "multinode-209237" | sudo tee /etc/hostname
	I0914 00:27:11.943296   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-209237
	
	I0914 00:27:11.943338   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:11.945897   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.946220   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:11.946252   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:11.946427   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:11.946602   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.946764   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:11.946900   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:11.947050   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:11.947283   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:11.947301   43671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-209237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-209237/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-209237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:27:12.048416   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:27:12.048445   43671 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:27:12.048480   43671 buildroot.go:174] setting up certificates
	I0914 00:27:12.048489   43671 provision.go:84] configureAuth start
	I0914 00:27:12.048502   43671 main.go:141] libmachine: (multinode-209237) Calling .GetMachineName
	I0914 00:27:12.048785   43671 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:27:12.051597   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.052025   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.052069   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.052152   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:12.054562   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.054917   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.054943   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.055090   43671 provision.go:143] copyHostCerts
	I0914 00:27:12.055126   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:27:12.055154   43671 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:27:12.055165   43671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:27:12.055235   43671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:27:12.055338   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:27:12.055357   43671 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:27:12.055361   43671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:27:12.055385   43671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:27:12.055447   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:27:12.055463   43671 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:27:12.055468   43671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:27:12.055505   43671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:27:12.055567   43671 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.multinode-209237 san=[127.0.0.1 192.168.39.214 localhost minikube multinode-209237]
	I0914 00:27:12.137208   43671 provision.go:177] copyRemoteCerts
	I0914 00:27:12.137289   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:27:12.137322   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:12.140041   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.140403   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.140430   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.140645   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:12.140804   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:12.140961   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:12.141082   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:27:12.222530   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 00:27:12.222614   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:27:12.248391   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 00:27:12.248457   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 00:27:12.274271   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 00:27:12.274342   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 00:27:12.299419   43671 provision.go:87] duration metric: took 250.913735ms to configureAuth
	I0914 00:27:12.299451   43671 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:27:12.299767   43671 config.go:182] Loaded profile config "multinode-209237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:27:12.299877   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:27:12.302632   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.302993   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:27:12.303029   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:27:12.303183   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:27:12.303384   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:12.303525   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:27:12.303657   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:27:12.303848   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:27:12.304060   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:27:12.304081   43671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:28:43.095199   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:28:43.095232   43671 machine.go:96] duration metric: took 1m31.379388341s to provisionDockerMachine
	I0914 00:28:43.095245   43671 start.go:293] postStartSetup for "multinode-209237" (driver="kvm2")
	I0914 00:28:43.095257   43671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:28:43.095277   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.095611   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:28:43.095646   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.098711   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.099117   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.099148   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.099295   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.099464   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.099620   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.099760   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:28:43.187315   43671 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:28:43.191225   43671 command_runner.go:130] > NAME=Buildroot
	I0914 00:28:43.191245   43671 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0914 00:28:43.191249   43671 command_runner.go:130] > ID=buildroot
	I0914 00:28:43.191254   43671 command_runner.go:130] > VERSION_ID=2023.02.9
	I0914 00:28:43.191258   43671 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0914 00:28:43.191295   43671 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:28:43.191314   43671 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:28:43.191378   43671 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:28:43.191458   43671 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:28:43.191470   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /etc/ssl/certs/126022.pem
	I0914 00:28:43.191549   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:28:43.200932   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:28:43.223950   43671 start.go:296] duration metric: took 128.692129ms for postStartSetup
	I0914 00:28:43.223988   43671 fix.go:56] duration metric: took 1m31.530248803s for fixHost
	I0914 00:28:43.224008   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.227098   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.227550   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.227580   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.227730   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.228027   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.228181   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.228337   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.228480   43671 main.go:141] libmachine: Using SSH client type: native
	I0914 00:28:43.228685   43671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0914 00:28:43.228695   43671 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:28:43.328407   43671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726273723.305191186
	
	I0914 00:28:43.328429   43671 fix.go:216] guest clock: 1726273723.305191186
	I0914 00:28:43.328438   43671 fix.go:229] Guest: 2024-09-14 00:28:43.305191186 +0000 UTC Remote: 2024-09-14 00:28:43.22399252 +0000 UTC m=+91.654910852 (delta=81.198666ms)
	I0914 00:28:43.328477   43671 fix.go:200] guest clock delta is within tolerance: 81.198666ms
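The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and only resync when the difference exceeds a tolerance. A minimal sketch of that comparison, using the two timestamps from the log and an assumed 2s tolerance (the helper names are illustrative, not the minikube source):

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDelta reports how far the guest clock is ahead of (positive) or
// behind (negative) the host clock.
func clockDelta(guest, host time.Time) time.Duration {
	return guest.Sub(host)
}

// needsAdjustment is true when the absolute delta exceeds the tolerance.
func needsAdjustment(delta, tolerance time.Duration) bool {
	return math.Abs(float64(delta)) > float64(tolerance)
}

func main() {
	// Values taken from the log above: guest 00:28:43.305191186, host 00:28:43.223992520.
	guest := time.Date(2024, 9, 14, 0, 28, 43, 305191186, time.UTC)
	host := time.Date(2024, 9, 14, 0, 28, 43, 223992520, time.UTC)

	delta := clockDelta(guest, host)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, !needsAdjustment(delta, 2*time.Second))
}

Run against the values above this prints delta=81.198666ms, matching the "delta is within tolerance" line in the log.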
	I0914 00:28:43.328492   43671 start.go:83] releasing machines lock for "multinode-209237", held for 1m31.63477863s
	I0914 00:28:43.328513   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.328820   43671 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:28:43.331920   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.332287   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.332307   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.332519   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.333145   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.333333   43671 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:28:43.333430   43671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:28:43.333483   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.333553   43671 ssh_runner.go:195] Run: cat /version.json
	I0914 00:28:43.333578   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:28:43.336090   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336390   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336472   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.336508   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336672   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.336785   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:43.336814   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:43.336815   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.336954   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:28:43.336963   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.337129   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:28:43.337162   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:28:43.337291   43671 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:28:43.337428   43671 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:28:43.413229   43671 command_runner.go:130] > {"iso_version": "v1.34.0-1726243933-19640", "kicbase_version": "v0.0.45-1726193793-19634", "minikube_version": "v1.34.0", "commit": "e7c5cc0da7d849951636fa2daac0332e4074a4f1"}
	I0914 00:28:43.413390   43671 ssh_runner.go:195] Run: systemctl --version
	I0914 00:28:43.452484   43671 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 00:28:43.452532   43671 command_runner.go:130] > systemd 252 (252)
	I0914 00:28:43.452563   43671 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0914 00:28:43.452620   43671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:28:43.613356   43671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 00:28:43.621004   43671 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 00:28:43.621063   43671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:28:43.621106   43671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:28:43.630009   43671 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
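The find/mv command above renames any bridge or podman CNI config under /etc/cni/net.d to <name>.mk_disabled so CRI-O will no longer load it; here nothing matches, so nothing is disabled. A rough local Go equivalent of that rename pass (the directory and suffix come from the command in the log; this is a sketch, not the minikube implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames any bridge/podman CNI config in dir to
// <name>.mk_disabled, mirroring the `find ... -exec mv {} {}.mk_disabled`
// command from the log. It returns the paths it disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("disabled %d bridge CNI config(s): %v\n", len(disabled), disabled)
}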
	I0914 00:28:43.630037   43671 start.go:495] detecting cgroup driver to use...
	I0914 00:28:43.630100   43671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:28:43.645753   43671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:28:43.660142   43671 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:28:43.660201   43671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:28:43.673974   43671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:28:43.687282   43671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:28:43.845933   43671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:28:43.995728   43671 docker.go:233] disabling docker service ...
	I0914 00:28:43.995815   43671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:28:44.011990   43671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:28:44.025215   43671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:28:44.165065   43671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:28:44.301464   43671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
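Cgroup-driver detection starts by stopping and masking the competing runtimes (containerd, cri-docker, docker) and then probing whether their units are still active. The probe relies on `systemctl is-active --quiet` exiting 0 only for an active unit; a small Go sketch of that check (unit names are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// unitIsActive mirrors the `systemctl is-active --quiet <unit>` probe in the
// log: the command exits 0 only when the unit is active.
func unitIsActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, unitIsActive(unit))
	}
}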
	I0914 00:28:44.314952   43671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:28:44.335755   43671 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
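The tee pipeline above writes /etc/crictl.yaml so that crictl talks to the CRI-O socket. A local Go equivalent of that one-line config write (path and endpoint as shown in the log; not the minikube source):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig points crictl at the CRI runtime endpoint, equivalent to
// the `printf ... | sudo tee /etc/crictl.yaml` pipeline in the log.
func writeCrictlConfig(path, endpoint string) error {
	content := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := writeCrictlConfig("/etc/crictl.yaml", "unix:///var/run/crio/crio.sock"); err != nil {
		fmt.Println("error:", err)
	}
}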
	I0914 00:28:44.335815   43671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:28:44.335857   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.346352   43671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:28:44.346417   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.356690   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.366779   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.376947   43671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:28:44.387218   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.397320   43671 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.408117   43671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:28:44.418435   43671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:28:44.427853   43671 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 00:28:44.427924   43671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:28:44.437335   43671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:28:44.593998   43671 ssh_runner.go:195] Run: sudo systemctl restart crio
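The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager to cgroupfs before crio is restarted. A sketch of the same two substitutions done with Go regexps (it assumes the file already contains pause_image and cgroup_manager lines, just as the sed expressions do):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands above:
// it pins the pause image and switches the cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Println("error:", err)
	}
}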
	I0914 00:28:44.796545   43671 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:28:44.796625   43671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:28:44.801401   43671 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 00:28:44.801426   43671 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 00:28:44.801433   43671 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0914 00:28:44.801440   43671 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 00:28:44.801446   43671 command_runner.go:130] > Access: 2024-09-14 00:28:44.665500227 +0000
	I0914 00:28:44.801452   43671 command_runner.go:130] > Modify: 2024-09-14 00:28:44.665500227 +0000
	I0914 00:28:44.801457   43671 command_runner.go:130] > Change: 2024-09-14 00:28:44.665500227 +0000
	I0914 00:28:44.801461   43671 command_runner.go:130] >  Birth: -
	I0914 00:28:44.801478   43671 start.go:563] Will wait 60s for crictl version
	I0914 00:28:44.801532   43671 ssh_runner.go:195] Run: which crictl
	I0914 00:28:44.805469   43671 command_runner.go:130] > /usr/bin/crictl
	I0914 00:28:44.805562   43671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:28:44.847205   43671 command_runner.go:130] > Version:  0.1.0
	I0914 00:28:44.847237   43671 command_runner.go:130] > RuntimeName:  cri-o
	I0914 00:28:44.847244   43671 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0914 00:28:44.847249   43671 command_runner.go:130] > RuntimeApiVersion:  v1
	I0914 00:28:44.847267   43671 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:28:44.847352   43671 ssh_runner.go:195] Run: crio --version
	I0914 00:28:44.876953   43671 command_runner.go:130] > crio version 1.29.1
	I0914 00:28:44.876979   43671 command_runner.go:130] > Version:        1.29.1
	I0914 00:28:44.876985   43671 command_runner.go:130] > GitCommit:      unknown
	I0914 00:28:44.876989   43671 command_runner.go:130] > GitCommitDate:  unknown
	I0914 00:28:44.876993   43671 command_runner.go:130] > GitTreeState:   clean
	I0914 00:28:44.876999   43671 command_runner.go:130] > BuildDate:      2024-09-13T21:54:05Z
	I0914 00:28:44.877003   43671 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 00:28:44.877006   43671 command_runner.go:130] > Compiler:       gc
	I0914 00:28:44.877011   43671 command_runner.go:130] > Platform:       linux/amd64
	I0914 00:28:44.877014   43671 command_runner.go:130] > Linkmode:       dynamic
	I0914 00:28:44.877018   43671 command_runner.go:130] > BuildTags:      
	I0914 00:28:44.877022   43671 command_runner.go:130] >   containers_image_ostree_stub
	I0914 00:28:44.877026   43671 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 00:28:44.877030   43671 command_runner.go:130] >   btrfs_noversion
	I0914 00:28:44.877035   43671 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 00:28:44.877039   43671 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 00:28:44.877043   43671 command_runner.go:130] >   seccomp
	I0914 00:28:44.877047   43671 command_runner.go:130] > LDFlags:          unknown
	I0914 00:28:44.877052   43671 command_runner.go:130] > SeccompEnabled:   true
	I0914 00:28:44.877067   43671 command_runner.go:130] > AppArmorEnabled:  false
	I0914 00:28:44.877129   43671 ssh_runner.go:195] Run: crio --version
	I0914 00:28:44.904807   43671 command_runner.go:130] > crio version 1.29.1
	I0914 00:28:44.904827   43671 command_runner.go:130] > Version:        1.29.1
	I0914 00:28:44.904833   43671 command_runner.go:130] > GitCommit:      unknown
	I0914 00:28:44.904837   43671 command_runner.go:130] > GitCommitDate:  unknown
	I0914 00:28:44.904841   43671 command_runner.go:130] > GitTreeState:   clean
	I0914 00:28:44.904860   43671 command_runner.go:130] > BuildDate:      2024-09-13T21:54:05Z
	I0914 00:28:44.904864   43671 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 00:28:44.904868   43671 command_runner.go:130] > Compiler:       gc
	I0914 00:28:44.904873   43671 command_runner.go:130] > Platform:       linux/amd64
	I0914 00:28:44.904876   43671 command_runner.go:130] > Linkmode:       dynamic
	I0914 00:28:44.904881   43671 command_runner.go:130] > BuildTags:      
	I0914 00:28:44.904885   43671 command_runner.go:130] >   containers_image_ostree_stub
	I0914 00:28:44.904889   43671 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 00:28:44.904893   43671 command_runner.go:130] >   btrfs_noversion
	I0914 00:28:44.904898   43671 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 00:28:44.904903   43671 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 00:28:44.904906   43671 command_runner.go:130] >   seccomp
	I0914 00:28:44.904911   43671 command_runner.go:130] > LDFlags:          unknown
	I0914 00:28:44.904917   43671 command_runner.go:130] > SeccompEnabled:   true
	I0914 00:28:44.904921   43671 command_runner.go:130] > AppArmorEnabled:  false
	I0914 00:28:44.907889   43671 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 00:28:44.909240   43671 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:28:44.912019   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:44.912345   43671 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:28:44.912379   43671 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:28:44.912579   43671 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 00:28:44.916677   43671 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0914 00:28:44.916782   43671 kubeadm.go:883] updating cluster {Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:28:44.916950   43671 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:28:44.917011   43671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:28:44.955755   43671 command_runner.go:130] > {
	I0914 00:28:44.955782   43671 command_runner.go:130] >   "images": [
	I0914 00:28:44.955806   43671 command_runner.go:130] >     {
	I0914 00:28:44.955817   43671 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 00:28:44.955824   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.955832   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 00:28:44.955837   43671 command_runner.go:130] >       ],
	I0914 00:28:44.955843   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.955863   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 00:28:44.955877   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 00:28:44.955890   43671 command_runner.go:130] >       ],
	I0914 00:28:44.955900   43671 command_runner.go:130] >       "size": "87190579",
	I0914 00:28:44.955906   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.955913   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.955924   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.955931   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.955938   43671 command_runner.go:130] >     },
	I0914 00:28:44.955944   43671 command_runner.go:130] >     {
	I0914 00:28:44.955956   43671 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 00:28:44.955963   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.955972   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 00:28:44.955981   43671 command_runner.go:130] >       ],
	I0914 00:28:44.955988   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956001   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 00:28:44.956014   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 00:28:44.956022   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956030   43671 command_runner.go:130] >       "size": "1363676",
	I0914 00:28:44.956039   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.956051   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956060   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956067   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956075   43671 command_runner.go:130] >     },
	I0914 00:28:44.956082   43671 command_runner.go:130] >     {
	I0914 00:28:44.956092   43671 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 00:28:44.956101   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956111   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 00:28:44.956120   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956127   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956142   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 00:28:44.956158   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 00:28:44.956167   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956175   43671 command_runner.go:130] >       "size": "31470524",
	I0914 00:28:44.956182   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.956198   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956208   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956217   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956223   43671 command_runner.go:130] >     },
	I0914 00:28:44.956231   43671 command_runner.go:130] >     {
	I0914 00:28:44.956243   43671 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 00:28:44.956251   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956261   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 00:28:44.956270   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956277   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956292   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 00:28:44.956313   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 00:28:44.956319   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956327   43671 command_runner.go:130] >       "size": "63273227",
	I0914 00:28:44.956336   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.956346   43671 command_runner.go:130] >       "username": "nonroot",
	I0914 00:28:44.956356   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956365   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956372   43671 command_runner.go:130] >     },
	I0914 00:28:44.956379   43671 command_runner.go:130] >     {
	I0914 00:28:44.956392   43671 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 00:28:44.956399   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956410   43671 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 00:28:44.956419   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956426   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956441   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 00:28:44.956455   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 00:28:44.956464   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956471   43671 command_runner.go:130] >       "size": "149009664",
	I0914 00:28:44.956480   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.956487   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.956496   43671 command_runner.go:130] >       },
	I0914 00:28:44.956503   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956520   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956530   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956537   43671 command_runner.go:130] >     },
	I0914 00:28:44.956558   43671 command_runner.go:130] >     {
	I0914 00:28:44.956579   43671 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 00:28:44.956588   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956598   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 00:28:44.956606   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956613   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956628   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 00:28:44.956643   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 00:28:44.956652   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956659   43671 command_runner.go:130] >       "size": "95237600",
	I0914 00:28:44.956665   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.956672   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.956681   43671 command_runner.go:130] >       },
	I0914 00:28:44.956688   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956698   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956707   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956713   43671 command_runner.go:130] >     },
	I0914 00:28:44.956720   43671 command_runner.go:130] >     {
	I0914 00:28:44.956731   43671 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 00:28:44.956741   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956750   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 00:28:44.956758   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956765   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956779   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 00:28:44.956795   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 00:28:44.956804   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956811   43671 command_runner.go:130] >       "size": "89437508",
	I0914 00:28:44.956821   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.956829   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.956837   43671 command_runner.go:130] >       },
	I0914 00:28:44.956851   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.956860   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.956868   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.956875   43671 command_runner.go:130] >     },
	I0914 00:28:44.956881   43671 command_runner.go:130] >     {
	I0914 00:28:44.956902   43671 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 00:28:44.956910   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.956920   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 00:28:44.956928   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956936   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.956968   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 00:28:44.956981   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 00:28:44.956987   43671 command_runner.go:130] >       ],
	I0914 00:28:44.956992   43671 command_runner.go:130] >       "size": "92733849",
	I0914 00:28:44.956999   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.957008   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.957013   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.957018   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.957023   43671 command_runner.go:130] >     },
	I0914 00:28:44.957027   43671 command_runner.go:130] >     {
	I0914 00:28:44.957036   43671 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 00:28:44.957041   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.957048   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 00:28:44.957053   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957061   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.957072   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 00:28:44.957084   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 00:28:44.957091   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957098   43671 command_runner.go:130] >       "size": "68420934",
	I0914 00:28:44.957105   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.957111   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.957117   43671 command_runner.go:130] >       },
	I0914 00:28:44.957123   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.957138   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.957146   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.957151   43671 command_runner.go:130] >     },
	I0914 00:28:44.957158   43671 command_runner.go:130] >     {
	I0914 00:28:44.957172   43671 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 00:28:44.957181   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.957189   43671 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 00:28:44.957198   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957205   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.957219   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 00:28:44.957234   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 00:28:44.957242   43671 command_runner.go:130] >       ],
	I0914 00:28:44.957250   43671 command_runner.go:130] >       "size": "742080",
	I0914 00:28:44.957259   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.957267   43671 command_runner.go:130] >         "value": "65535"
	I0914 00:28:44.957274   43671 command_runner.go:130] >       },
	I0914 00:28:44.957282   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.957291   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.957297   43671 command_runner.go:130] >       "pinned": true
	I0914 00:28:44.957304   43671 command_runner.go:130] >     }
	I0914 00:28:44.957311   43671 command_runner.go:130] >   ]
	I0914 00:28:44.957317   43671 command_runner.go:130] > }
	I0914 00:28:44.957495   43671 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:28:44.957508   43671 crio.go:433] Images already preloaded, skipping extraction
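The preload check above runs `sudo crictl images --output json` and concludes that every required image is already present in the CRI-O store. A sketch of how such a listing can be decoded and compared against a required set, assuming only the JSON shape visible in the log (the struct and helper names are illustrative, not the crio.go source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the shape of the `crictl images --output json` output above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the required tags that are not present in the listing.
func missingImages(listing []byte, required []string) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(listing, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, tag := range required {
		if !have[tag] {
			missing = append(missing, tag)
		}
	}
	return missing, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	missing, err := missingImages(out, required)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("missing images: %v\n", missing)
}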
	I0914 00:28:44.957571   43671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:28:44.990992   43671 command_runner.go:130] > {
	I0914 00:28:44.991017   43671 command_runner.go:130] >   "images": [
	I0914 00:28:44.991021   43671 command_runner.go:130] >     {
	I0914 00:28:44.991028   43671 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 00:28:44.991033   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991038   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 00:28:44.991043   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991047   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991059   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 00:28:44.991078   43671 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 00:28:44.991087   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991094   43671 command_runner.go:130] >       "size": "87190579",
	I0914 00:28:44.991102   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991106   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991118   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991125   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991128   43671 command_runner.go:130] >     },
	I0914 00:28:44.991134   43671 command_runner.go:130] >     {
	I0914 00:28:44.991143   43671 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 00:28:44.991152   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991163   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 00:28:44.991173   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991179   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991194   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 00:28:44.991205   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 00:28:44.991211   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991216   43671 command_runner.go:130] >       "size": "1363676",
	I0914 00:28:44.991224   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991236   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991246   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991256   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991264   43671 command_runner.go:130] >     },
	I0914 00:28:44.991269   43671 command_runner.go:130] >     {
	I0914 00:28:44.991282   43671 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 00:28:44.991288   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991296   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 00:28:44.991303   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991308   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991325   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 00:28:44.991341   43671 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 00:28:44.991349   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991359   43671 command_runner.go:130] >       "size": "31470524",
	I0914 00:28:44.991375   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991383   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991388   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991395   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991401   43671 command_runner.go:130] >     },
	I0914 00:28:44.991412   43671 command_runner.go:130] >     {
	I0914 00:28:44.991425   43671 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 00:28:44.991434   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991450   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 00:28:44.991459   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991468   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991478   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 00:28:44.991500   43671 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 00:28:44.991510   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991520   43671 command_runner.go:130] >       "size": "63273227",
	I0914 00:28:44.991529   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.991539   43671 command_runner.go:130] >       "username": "nonroot",
	I0914 00:28:44.991552   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991560   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991563   43671 command_runner.go:130] >     },
	I0914 00:28:44.991568   43671 command_runner.go:130] >     {
	I0914 00:28:44.991581   43671 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 00:28:44.991591   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991601   43671 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 00:28:44.991610   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991619   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991633   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 00:28:44.991644   43671 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 00:28:44.991651   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991657   43671 command_runner.go:130] >       "size": "149009664",
	I0914 00:28:44.991666   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.991673   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.991682   43671 command_runner.go:130] >       },
	I0914 00:28:44.991696   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991707   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991716   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991724   43671 command_runner.go:130] >     },
	I0914 00:28:44.991730   43671 command_runner.go:130] >     {
	I0914 00:28:44.991737   43671 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 00:28:44.991745   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991756   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 00:28:44.991764   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991774   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991799   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 00:28:44.991815   43671 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 00:28:44.991823   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991833   43671 command_runner.go:130] >       "size": "95237600",
	I0914 00:28:44.991842   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.991851   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.991858   43671 command_runner.go:130] >       },
	I0914 00:28:44.991863   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.991871   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.991880   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.991889   43671 command_runner.go:130] >     },
	I0914 00:28:44.991897   43671 command_runner.go:130] >     {
	I0914 00:28:44.991907   43671 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 00:28:44.991917   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.991928   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 00:28:44.991936   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991942   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.991952   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 00:28:44.991972   43671 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 00:28:44.991984   43671 command_runner.go:130] >       ],
	I0914 00:28:44.991993   43671 command_runner.go:130] >       "size": "89437508",
	I0914 00:28:44.992002   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.992011   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.992025   43671 command_runner.go:130] >       },
	I0914 00:28:44.992032   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992037   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992046   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.992054   43671 command_runner.go:130] >     },
	I0914 00:28:44.992063   43671 command_runner.go:130] >     {
	I0914 00:28:44.992076   43671 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 00:28:44.992085   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.992096   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 00:28:44.992103   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992112   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.992138   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 00:28:44.992153   43671 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 00:28:44.992161   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992168   43671 command_runner.go:130] >       "size": "92733849",
	I0914 00:28:44.992177   43671 command_runner.go:130] >       "uid": null,
	I0914 00:28:44.992186   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992196   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992203   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.992206   43671 command_runner.go:130] >     },
	I0914 00:28:44.992212   43671 command_runner.go:130] >     {
	I0914 00:28:44.992224   43671 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 00:28:44.992233   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.992244   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 00:28:44.992253   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992262   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.992277   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 00:28:44.992288   43671 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 00:28:44.992294   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992301   43671 command_runner.go:130] >       "size": "68420934",
	I0914 00:28:44.992309   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.992318   43671 command_runner.go:130] >         "value": "0"
	I0914 00:28:44.992327   43671 command_runner.go:130] >       },
	I0914 00:28:44.992344   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992353   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992362   43671 command_runner.go:130] >       "pinned": false
	I0914 00:28:44.992370   43671 command_runner.go:130] >     },
	I0914 00:28:44.992378   43671 command_runner.go:130] >     {
	I0914 00:28:44.992384   43671 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 00:28:44.992392   43671 command_runner.go:130] >       "repoTags": [
	I0914 00:28:44.992402   43671 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 00:28:44.992410   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992421   43671 command_runner.go:130] >       "repoDigests": [
	I0914 00:28:44.992435   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 00:28:44.992460   43671 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 00:28:44.992467   43671 command_runner.go:130] >       ],
	I0914 00:28:44.992472   43671 command_runner.go:130] >       "size": "742080",
	I0914 00:28:44.992477   43671 command_runner.go:130] >       "uid": {
	I0914 00:28:44.992486   43671 command_runner.go:130] >         "value": "65535"
	I0914 00:28:44.992495   43671 command_runner.go:130] >       },
	I0914 00:28:44.992505   43671 command_runner.go:130] >       "username": "",
	I0914 00:28:44.992514   43671 command_runner.go:130] >       "spec": null,
	I0914 00:28:44.992523   43671 command_runner.go:130] >       "pinned": true
	I0914 00:28:44.992531   43671 command_runner.go:130] >     }
	I0914 00:28:44.992545   43671 command_runner.go:130] >   ]
	I0914 00:28:44.992557   43671 command_runner.go:130] > }
	I0914 00:28:44.992724   43671 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:28:44.992736   43671 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:28:44.992745   43671 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.31.1 crio true true} ...
	I0914 00:28:44.992863   43671 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-209237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
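The kubeadm.go:946 block above is the kubelet systemd drop-in rendered for this node, with the Kubernetes version, hostname override, and node IP filled in from the cluster config. A minimal text/template sketch that produces the same drop-in; the template fields and struct are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the values substituted into the drop-in seen in the log.
type kubeletUnit struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTemplate = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(unitTemplate))
	unit := kubeletUnit{
		KubernetesVersion: "v1.31.1",
		NodeName:          "multinode-209237",
		NodeIP:            "192.168.39.214",
	}
	// Write the rendered drop-in to stdout; the real flow pushes it to the node over SSH.
	if err := tmpl.Execute(os.Stdout, unit); err != nil {
		panic(err)
	}
}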
	I0914 00:28:44.992955   43671 ssh_runner.go:195] Run: crio config
	I0914 00:28:45.035177   43671 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 00:28:45.035205   43671 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 00:28:45.035215   43671 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 00:28:45.035220   43671 command_runner.go:130] > #
	I0914 00:28:45.035229   43671 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 00:28:45.035238   43671 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 00:28:45.035247   43671 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 00:28:45.035258   43671 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 00:28:45.035264   43671 command_runner.go:130] > # reload'.
	I0914 00:28:45.035275   43671 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 00:28:45.035288   43671 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 00:28:45.035299   43671 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 00:28:45.035312   43671 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 00:28:45.035319   43671 command_runner.go:130] > [crio]
	I0914 00:28:45.035329   43671 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 00:28:45.035347   43671 command_runner.go:130] > # containers images, in this directory.
	I0914 00:28:45.035519   43671 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 00:28:45.035569   43671 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 00:28:45.035636   43671 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 00:28:45.035653   43671 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0914 00:28:45.035734   43671 command_runner.go:130] > # imagestore = ""
	I0914 00:28:45.035759   43671 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 00:28:45.035771   43671 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 00:28:45.035875   43671 command_runner.go:130] > storage_driver = "overlay"
	I0914 00:28:45.035886   43671 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 00:28:45.035892   43671 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 00:28:45.035897   43671 command_runner.go:130] > storage_option = [
	I0914 00:28:45.036034   43671 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 00:28:45.036095   43671 command_runner.go:130] > ]
	I0914 00:28:45.036110   43671 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 00:28:45.036119   43671 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 00:28:45.036350   43671 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 00:28:45.036366   43671 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 00:28:45.036376   43671 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 00:28:45.036383   43671 command_runner.go:130] > # always happen on a node reboot
	I0914 00:28:45.036604   43671 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 00:28:45.036657   43671 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 00:28:45.036675   43671 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 00:28:45.036682   43671 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 00:28:45.036814   43671 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0914 00:28:45.036832   43671 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 00:28:45.036845   43671 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 00:28:45.037020   43671 command_runner.go:130] > # internal_wipe = true
	I0914 00:28:45.037038   43671 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0914 00:28:45.037047   43671 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0914 00:28:45.037255   43671 command_runner.go:130] > # internal_repair = false
	I0914 00:28:45.037266   43671 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 00:28:45.037273   43671 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 00:28:45.037279   43671 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 00:28:45.037498   43671 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 00:28:45.037509   43671 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 00:28:45.037519   43671 command_runner.go:130] > [crio.api]
	I0914 00:28:45.037524   43671 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 00:28:45.037781   43671 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 00:28:45.037801   43671 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 00:28:45.038013   43671 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 00:28:45.038030   43671 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 00:28:45.038038   43671 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 00:28:45.038247   43671 command_runner.go:130] > # stream_port = "0"
	I0914 00:28:45.038262   43671 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 00:28:45.038490   43671 command_runner.go:130] > # stream_enable_tls = false
	I0914 00:28:45.038514   43671 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 00:28:45.038733   43671 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 00:28:45.038750   43671 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 00:28:45.038760   43671 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 00:28:45.038766   43671 command_runner.go:130] > # minutes.
	I0914 00:28:45.038924   43671 command_runner.go:130] > # stream_tls_cert = ""
	I0914 00:28:45.038934   43671 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 00:28:45.038940   43671 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 00:28:45.039092   43671 command_runner.go:130] > # stream_tls_key = ""
	I0914 00:28:45.039106   43671 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 00:28:45.039116   43671 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 00:28:45.039144   43671 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 00:28:45.039283   43671 command_runner.go:130] > # stream_tls_ca = ""
	I0914 00:28:45.039295   43671 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 00:28:45.039400   43671 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 00:28:45.039420   43671 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 00:28:45.039534   43671 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
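minikube pins both gRPC message-size limits to 16 MiB in this config. As a minimal sketch (not taken from this log), the same values could be set through a CRI-O drop-in file rather than by editing crio.conf directly; /etc/crio/crio.conf.d is CRI-O's usual drop-in directory and is assumed here.

# Sketch only: set the gRPC limits via a drop-in file; the drop-in directory
# is an assumption, not shown in this log.
sudo tee /etc/crio/crio.conf.d/99-grpc.conf >/dev/null <<'EOF'
[crio.api]
grpc_max_send_msg_size = 16777216
grpc_max_recv_msg_size = 16777216
EOF
sudo systemctl restart crio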
	I0914 00:28:45.039549   43671 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 00:28:45.039567   43671 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 00:28:45.039576   43671 command_runner.go:130] > [crio.runtime]
	I0914 00:28:45.039585   43671 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 00:28:45.039596   43671 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 00:28:45.039602   43671 command_runner.go:130] > # "nofile=1024:2048"
	I0914 00:28:45.039613   43671 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 00:28:45.039644   43671 command_runner.go:130] > # default_ulimits = [
	I0914 00:28:45.039917   43671 command_runner.go:130] > # ]
	I0914 00:28:45.039983   43671 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 00:28:45.040094   43671 command_runner.go:130] > # no_pivot = false
	I0914 00:28:45.040108   43671 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 00:28:45.040117   43671 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 00:28:45.040345   43671 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 00:28:45.040363   43671 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 00:28:45.040369   43671 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 00:28:45.040377   43671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 00:28:45.040470   43671 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 00:28:45.040482   43671 command_runner.go:130] > # Cgroup setting for conmon
	I0914 00:28:45.040493   43671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 00:28:45.040596   43671 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 00:28:45.040613   43671 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 00:28:45.040621   43671 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 00:28:45.040632   43671 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 00:28:45.040640   43671 command_runner.go:130] > conmon_env = [
	I0914 00:28:45.040752   43671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 00:28:45.040788   43671 command_runner.go:130] > ]
	I0914 00:28:45.040800   43671 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 00:28:45.040812   43671 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 00:28:45.040821   43671 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 00:28:45.040908   43671 command_runner.go:130] > # default_env = [
	I0914 00:28:45.041059   43671 command_runner.go:130] > # ]
	I0914 00:28:45.041076   43671 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 00:28:45.041088   43671 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0914 00:28:45.041300   43671 command_runner.go:130] > # selinux = false
	I0914 00:28:45.041323   43671 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 00:28:45.041333   43671 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 00:28:45.041346   43671 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 00:28:45.041486   43671 command_runner.go:130] > # seccomp_profile = ""
	I0914 00:28:45.041501   43671 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 00:28:45.041509   43671 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 00:28:45.041518   43671 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 00:28:45.041524   43671 command_runner.go:130] > # which might increase security.
	I0914 00:28:45.041532   43671 command_runner.go:130] > # This option is currently deprecated,
	I0914 00:28:45.041542   43671 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0914 00:28:45.041616   43671 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 00:28:45.041634   43671 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 00:28:45.041648   43671 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 00:28:45.041660   43671 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 00:28:45.041667   43671 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 00:28:45.041672   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.041883   43671 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 00:28:45.041896   43671 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 00:28:45.041903   43671 command_runner.go:130] > # the cgroup blockio controller.
	I0914 00:28:45.042086   43671 command_runner.go:130] > # blockio_config_file = ""
	I0914 00:28:45.042099   43671 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0914 00:28:45.042105   43671 command_runner.go:130] > # blockio parameters.
	I0914 00:28:45.042330   43671 command_runner.go:130] > # blockio_reload = false
	I0914 00:28:45.042344   43671 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 00:28:45.042350   43671 command_runner.go:130] > # irqbalance daemon.
	I0914 00:28:45.042566   43671 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 00:28:45.042579   43671 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0914 00:28:45.042589   43671 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0914 00:28:45.042600   43671 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0914 00:28:45.042828   43671 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0914 00:28:45.042841   43671 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 00:28:45.042850   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.043000   43671 command_runner.go:130] > # rdt_config_file = ""
	I0914 00:28:45.043011   43671 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 00:28:45.043128   43671 command_runner.go:130] > cgroup_manager = "cgroupfs"
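The cgroup_manager value above has to agree with the kubelet's cgroupDriver, and the KubeletConfiguration generated later in this log indeed uses cgroupDriver: cgroupfs. A minimal sketch of switching both sides to systemd instead (illustrative only; this run keeps cgroupfs):

# Sketch only: CRI-O's cgroup manager and the kubelet's cgroupDriver must match.
sudo tee /etc/crio/crio.conf.d/10-cgroup.conf >/dev/null <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"
EOF
# ...and in the KubeletConfiguration: cgroupDriver: systemd
sudo systemctl restart crio kubelet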
	I0914 00:28:45.043171   43671 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 00:28:45.043318   43671 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 00:28:45.043341   43671 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 00:28:45.043352   43671 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 00:28:45.043362   43671 command_runner.go:130] > # will be added.
	I0914 00:28:45.043448   43671 command_runner.go:130] > # default_capabilities = [
	I0914 00:28:45.043855   43671 command_runner.go:130] > # 	"CHOWN",
	I0914 00:28:45.044036   43671 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 00:28:45.044370   43671 command_runner.go:130] > # 	"FSETID",
	I0914 00:28:45.044611   43671 command_runner.go:130] > # 	"FOWNER",
	I0914 00:28:45.045001   43671 command_runner.go:130] > # 	"SETGID",
	I0914 00:28:45.045150   43671 command_runner.go:130] > # 	"SETUID",
	I0914 00:28:45.045381   43671 command_runner.go:130] > # 	"SETPCAP",
	I0914 00:28:45.045580   43671 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 00:28:45.045814   43671 command_runner.go:130] > # 	"KILL",
	I0914 00:28:45.045923   43671 command_runner.go:130] > # ]
	I0914 00:28:45.045939   43671 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0914 00:28:45.045950   43671 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0914 00:28:45.046177   43671 command_runner.go:130] > # add_inheritable_capabilities = false
	I0914 00:28:45.046191   43671 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 00:28:45.046200   43671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 00:28:45.046206   43671 command_runner.go:130] > default_sysctls = [
	I0914 00:28:45.046267   43671 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0914 00:28:45.046344   43671 command_runner.go:130] > ]
	I0914 00:28:45.046355   43671 command_runner.go:130] > # List of devices on the host that a
	I0914 00:28:45.046366   43671 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 00:28:45.046463   43671 command_runner.go:130] > # allowed_devices = [
	I0914 00:28:45.046600   43671 command_runner.go:130] > # 	"/dev/fuse",
	I0914 00:28:45.046794   43671 command_runner.go:130] > # ]
	I0914 00:28:45.046802   43671 command_runner.go:130] > # List of additional devices, specified as
	I0914 00:28:45.046813   43671 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 00:28:45.046824   43671 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 00:28:45.046835   43671 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 00:28:45.046872   43671 command_runner.go:130] > # additional_devices = [
	I0914 00:28:45.046995   43671 command_runner.go:130] > # ]
	I0914 00:28:45.047004   43671 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 00:28:45.047112   43671 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 00:28:45.047242   43671 command_runner.go:130] > # 	"/etc/cdi",
	I0914 00:28:45.047389   43671 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 00:28:45.048684   43671 command_runner.go:130] > # ]
	I0914 00:28:45.048700   43671 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 00:28:45.048710   43671 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 00:28:45.048715   43671 command_runner.go:130] > # Defaults to false.
	I0914 00:28:45.048733   43671 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 00:28:45.048744   43671 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 00:28:45.048754   43671 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 00:28:45.048763   43671 command_runner.go:130] > # hooks_dir = [
	I0914 00:28:45.048770   43671 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 00:28:45.048777   43671 command_runner.go:130] > # ]
	I0914 00:28:45.048788   43671 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 00:28:45.048801   43671 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 00:28:45.048810   43671 command_runner.go:130] > # its default mounts from the following two files:
	I0914 00:28:45.048818   43671 command_runner.go:130] > #
	I0914 00:28:45.048829   43671 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 00:28:45.048843   43671 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 00:28:45.048852   43671 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 00:28:45.048860   43671 command_runner.go:130] > #
	I0914 00:28:45.048872   43671 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 00:28:45.048885   43671 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 00:28:45.048899   43671 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 00:28:45.048910   43671 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 00:28:45.048918   43671 command_runner.go:130] > #
	I0914 00:28:45.048926   43671 command_runner.go:130] > # default_mounts_file = ""
	I0914 00:28:45.048939   43671 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 00:28:45.048951   43671 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 00:28:45.048960   43671 command_runner.go:130] > pids_limit = 1024
	I0914 00:28:45.048971   43671 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0914 00:28:45.048984   43671 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 00:28:45.048997   43671 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 00:28:45.049014   43671 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 00:28:45.049023   43671 command_runner.go:130] > # log_size_max = -1
	I0914 00:28:45.049036   43671 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 00:28:45.049044   43671 command_runner.go:130] > # log_to_journald = false
	I0914 00:28:45.049056   43671 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 00:28:45.049066   43671 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 00:28:45.049076   43671 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 00:28:45.049126   43671 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 00:28:45.049139   43671 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 00:28:45.049145   43671 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 00:28:45.049154   43671 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 00:28:45.049163   43671 command_runner.go:130] > # read_only = false
	I0914 00:28:45.049174   43671 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 00:28:45.049187   43671 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 00:28:45.049197   43671 command_runner.go:130] > # live configuration reload.
	I0914 00:28:45.049206   43671 command_runner.go:130] > # log_level = "info"
	I0914 00:28:45.049216   43671 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 00:28:45.049227   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.049237   43671 command_runner.go:130] > # log_filter = ""
	I0914 00:28:45.049247   43671 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 00:28:45.049265   43671 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 00:28:45.049274   43671 command_runner.go:130] > # separated by comma.
	I0914 00:28:45.049286   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049295   43671 command_runner.go:130] > # uid_mappings = ""
	I0914 00:28:45.049305   43671 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 00:28:45.049318   43671 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 00:28:45.049327   43671 command_runner.go:130] > # separated by comma.
	I0914 00:28:45.049341   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049350   43671 command_runner.go:130] > # gid_mappings = ""
	I0914 00:28:45.049361   43671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 00:28:45.049375   43671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 00:28:45.049390   43671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 00:28:45.049406   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049417   43671 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 00:28:45.049428   43671 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 00:28:45.049441   43671 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 00:28:45.049459   43671 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 00:28:45.049474   43671 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 00:28:45.049484   43671 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 00:28:45.049497   43671 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 00:28:45.049516   43671 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 00:28:45.049529   43671 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 00:28:45.049538   43671 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 00:28:45.049548   43671 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 00:28:45.049560   43671 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 00:28:45.049571   43671 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 00:28:45.049582   43671 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 00:28:45.049592   43671 command_runner.go:130] > drop_infra_ctr = false
	I0914 00:28:45.049604   43671 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 00:28:45.049616   43671 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 00:28:45.049629   43671 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 00:28:45.049639   43671 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 00:28:45.049653   43671 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0914 00:28:45.049664   43671 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0914 00:28:45.049674   43671 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0914 00:28:45.049687   43671 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0914 00:28:45.049696   43671 command_runner.go:130] > # shared_cpuset = ""
	I0914 00:28:45.049708   43671 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 00:28:45.049719   43671 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 00:28:45.049729   43671 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 00:28:45.049744   43671 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 00:28:45.049756   43671 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 00:28:45.049767   43671 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0914 00:28:45.049777   43671 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0914 00:28:45.049786   43671 command_runner.go:130] > # enable_criu_support = false
	I0914 00:28:45.049795   43671 command_runner.go:130] > # Enable/disable the generation of the container,
	I0914 00:28:45.049810   43671 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0914 00:28:45.049821   43671 command_runner.go:130] > # enable_pod_events = false
	I0914 00:28:45.049834   43671 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 00:28:45.049858   43671 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0914 00:28:45.049866   43671 command_runner.go:130] > # default_runtime = "runc"
	I0914 00:28:45.049878   43671 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 00:28:45.049896   43671 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0914 00:28:45.049914   43671 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 00:28:45.049925   43671 command_runner.go:130] > # creation as a file is not desired either.
	I0914 00:28:45.049939   43671 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 00:28:45.049949   43671 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 00:28:45.049958   43671 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 00:28:45.049965   43671 command_runner.go:130] > # ]
	I0914 00:28:45.049977   43671 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 00:28:45.049990   43671 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 00:28:45.050002   43671 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0914 00:28:45.050013   43671 command_runner.go:130] > # Each entry in the table should follow the format:
	I0914 00:28:45.050021   43671 command_runner.go:130] > #
	I0914 00:28:45.050030   43671 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0914 00:28:45.050040   43671 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0914 00:28:45.050090   43671 command_runner.go:130] > # runtime_type = "oci"
	I0914 00:28:45.050099   43671 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0914 00:28:45.050107   43671 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0914 00:28:45.050115   43671 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0914 00:28:45.050125   43671 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0914 00:28:45.050133   43671 command_runner.go:130] > # monitor_env = []
	I0914 00:28:45.050143   43671 command_runner.go:130] > # privileged_without_host_devices = false
	I0914 00:28:45.050153   43671 command_runner.go:130] > # allowed_annotations = []
	I0914 00:28:45.050162   43671 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0914 00:28:45.050170   43671 command_runner.go:130] > # Where:
	I0914 00:28:45.050179   43671 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0914 00:28:45.050193   43671 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0914 00:28:45.050206   43671 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 00:28:45.050219   43671 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 00:28:45.050234   43671 command_runner.go:130] > #   in $PATH.
	I0914 00:28:45.050248   43671 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0914 00:28:45.050259   43671 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 00:28:45.050287   43671 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0914 00:28:45.050296   43671 command_runner.go:130] > #   state.
	I0914 00:28:45.050313   43671 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 00:28:45.050325   43671 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0914 00:28:45.050339   43671 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 00:28:45.050350   43671 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 00:28:45.050364   43671 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 00:28:45.050377   43671 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 00:28:45.050387   43671 command_runner.go:130] > #   The currently recognized values are:
	I0914 00:28:45.050397   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 00:28:45.050412   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 00:28:45.050425   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 00:28:45.050437   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 00:28:45.050452   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 00:28:45.050465   43671 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 00:28:45.050479   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0914 00:28:45.050489   43671 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0914 00:28:45.050501   43671 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 00:28:45.050514   43671 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0914 00:28:45.050522   43671 command_runner.go:130] > #   deprecated option "conmon".
	I0914 00:28:45.050535   43671 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0914 00:28:45.050547   43671 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0914 00:28:45.050561   43671 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0914 00:28:45.050572   43671 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 00:28:45.050586   43671 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0914 00:28:45.050597   43671 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0914 00:28:45.050615   43671 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0914 00:28:45.050627   43671 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0914 00:28:45.050635   43671 command_runner.go:130] > #
	I0914 00:28:45.050643   43671 command_runner.go:130] > # Using the seccomp notifier feature:
	I0914 00:28:45.050649   43671 command_runner.go:130] > #
	I0914 00:28:45.050659   43671 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0914 00:28:45.050672   43671 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0914 00:28:45.050683   43671 command_runner.go:130] > #
	I0914 00:28:45.050694   43671 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0914 00:28:45.050714   43671 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0914 00:28:45.050722   43671 command_runner.go:130] > #
	I0914 00:28:45.050732   43671 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0914 00:28:45.050740   43671 command_runner.go:130] > # feature.
	I0914 00:28:45.050746   43671 command_runner.go:130] > #
	I0914 00:28:45.050762   43671 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0914 00:28:45.050774   43671 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0914 00:28:45.050788   43671 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0914 00:28:45.050801   43671 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0914 00:28:45.050814   43671 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0914 00:28:45.050821   43671 command_runner.go:130] > #
	I0914 00:28:45.050832   43671 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0914 00:28:45.050846   43671 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0914 00:28:45.050853   43671 command_runner.go:130] > #
	I0914 00:28:45.050864   43671 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0914 00:28:45.050876   43671 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0914 00:28:45.050884   43671 command_runner.go:130] > #
	I0914 00:28:45.050894   43671 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0914 00:28:45.050906   43671 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0914 00:28:45.050912   43671 command_runner.go:130] > # limitation.
	I0914 00:28:45.050923   43671 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 00:28:45.050933   43671 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 00:28:45.050940   43671 command_runner.go:130] > runtime_type = "oci"
	I0914 00:28:45.050950   43671 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 00:28:45.050959   43671 command_runner.go:130] > runtime_config_path = ""
	I0914 00:28:45.050968   43671 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0914 00:28:45.050977   43671 command_runner.go:130] > monitor_cgroup = "pod"
	I0914 00:28:45.050985   43671 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 00:28:45.050993   43671 command_runner.go:130] > monitor_env = [
	I0914 00:28:45.051002   43671 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 00:28:45.051009   43671 command_runner.go:130] > ]
	I0914 00:28:45.051017   43671 command_runner.go:130] > privileged_without_host_devices = false
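The [crio.runtime.runtimes.runc] block above is the only handler defined for this node; Kubernetes selects a handler by name through a RuntimeClass object. A sketch of what that would look like for a hypothetical additional handler named "crun" (not defined in this config):

# Sketch only: a RuntimeClass mapping pods to a CRI-O runtime handler.
# "crun" is a hypothetical handler name; only "runc" exists in the config above.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
EOF
# Pods opt in with: spec.runtimeClassName: crun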
	I0914 00:28:45.051029   43671 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 00:28:45.051046   43671 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 00:28:45.051059   43671 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 00:28:45.051075   43671 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 00:28:45.051091   43671 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 00:28:45.051103   43671 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 00:28:45.051121   43671 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 00:28:45.051136   43671 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 00:28:45.051149   43671 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 00:28:45.051163   43671 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 00:28:45.051171   43671 command_runner.go:130] > # Example:
	I0914 00:28:45.051179   43671 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 00:28:45.051190   43671 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 00:28:45.051199   43671 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 00:28:45.051211   43671 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 00:28:45.051219   43671 command_runner.go:130] > # cpuset = 0
	I0914 00:28:45.051227   43671 command_runner.go:130] > # cpushares = "0-1"
	I0914 00:28:45.051235   43671 command_runner.go:130] > # Where:
	I0914 00:28:45.051243   43671 command_runner.go:130] > # The workload name is workload-type.
	I0914 00:28:45.051258   43671 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 00:28:45.051272   43671 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 00:28:45.051284   43671 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 00:28:45.051300   43671 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 00:28:45.051312   43671 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 00:28:45.051324   43671 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0914 00:28:45.051338   43671 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0914 00:28:45.051347   43671 command_runner.go:130] > # Default value is set to true
	I0914 00:28:45.051357   43671 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0914 00:28:45.051370   43671 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0914 00:28:45.051380   43671 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0914 00:28:45.051389   43671 command_runner.go:130] > # Default value is set to 'false'
	I0914 00:28:45.051399   43671 command_runner.go:130] > # disable_hostport_mapping = false
	I0914 00:28:45.051410   43671 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 00:28:45.051418   43671 command_runner.go:130] > #
	I0914 00:28:45.051433   43671 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 00:28:45.051445   43671 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 00:28:45.051457   43671 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 00:28:45.051466   43671 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 00:28:45.051472   43671 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 00:28:45.051479   43671 command_runner.go:130] > [crio.image]
	I0914 00:28:45.051494   43671 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 00:28:45.051501   43671 command_runner.go:130] > # default_transport = "docker://"
	I0914 00:28:45.051513   43671 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 00:28:45.051523   43671 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 00:28:45.051529   43671 command_runner.go:130] > # global_auth_file = ""
	I0914 00:28:45.051537   43671 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 00:28:45.051546   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.051554   43671 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0914 00:28:45.051564   43671 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 00:28:45.051573   43671 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 00:28:45.051581   43671 command_runner.go:130] > # This option supports live configuration reload.
	I0914 00:28:45.051588   43671 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 00:28:45.051598   43671 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 00:28:45.051607   43671 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 00:28:45.051617   43671 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 00:28:45.051625   43671 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 00:28:45.051633   43671 command_runner.go:130] > # pause_command = "/pause"
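pause_image is pinned to registry.k8s.io/pause:3.10 for this run. A quick, hedged way to confirm the image is present on the node (assumes crictl is installed, as it is on minikube nodes):

# Sketch only: verify the configured pause image through the CRI socket.
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pull registry.k8s.io/pause:3.10
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images | grep pause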
	I0914 00:28:45.051642   43671 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0914 00:28:45.051651   43671 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0914 00:28:45.051660   43671 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0914 00:28:45.051670   43671 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0914 00:28:45.051681   43671 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0914 00:28:45.051694   43671 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0914 00:28:45.051704   43671 command_runner.go:130] > # pinned_images = [
	I0914 00:28:45.051710   43671 command_runner.go:130] > # ]
	I0914 00:28:45.051721   43671 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 00:28:45.051734   43671 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 00:28:45.051758   43671 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 00:28:45.051771   43671 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 00:28:45.051798   43671 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 00:28:45.051808   43671 command_runner.go:130] > # signature_policy = ""
	I0914 00:28:45.051818   43671 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0914 00:28:45.051834   43671 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0914 00:28:45.051847   43671 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0914 00:28:45.051860   43671 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0914 00:28:45.051873   43671 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0914 00:28:45.051889   43671 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0914 00:28:45.051904   43671 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 00:28:45.051917   43671 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 00:28:45.051926   43671 command_runner.go:130] > # changing them here.
	I0914 00:28:45.051935   43671 command_runner.go:130] > # insecure_registries = [
	I0914 00:28:45.051942   43671 command_runner.go:130] > # ]
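As the comment above notes, registry settings normally live in /etc/containers/registries.conf rather than in crio.conf. A sketch of a mirror entry in the v2 registries format; the drop-in path and the mirror host are placeholders, not taken from this log:

# Sketch only: route docker.io pulls through a local mirror using
# containers-registries.conf(5) v2 syntax; "mirror.example.internal" is a placeholder.
sudo tee /etc/containers/registries.conf.d/50-mirror.conf >/dev/null <<'EOF'
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "mirror.example.internal:5000"
insecure = true
EOF
sudo systemctl restart crio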
	I0914 00:28:45.051954   43671 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 00:28:45.051965   43671 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 00:28:45.051973   43671 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 00:28:45.051988   43671 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 00:28:45.051996   43671 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 00:28:45.052008   43671 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 00:28:45.052016   43671 command_runner.go:130] > # CNI plugins.
	I0914 00:28:45.052022   43671 command_runner.go:130] > [crio.network]
	I0914 00:28:45.052035   43671 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 00:28:45.052047   43671 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 00:28:45.052057   43671 command_runner.go:130] > # cni_default_network = ""
	I0914 00:28:45.052068   43671 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 00:28:45.052077   43671 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 00:28:45.052088   43671 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 00:28:45.052097   43671 command_runner.go:130] > # plugin_dirs = [
	I0914 00:28:45.052106   43671 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 00:28:45.052113   43671 command_runner.go:130] > # ]
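CRI-O uses the first CNI configuration it finds under network_dir; for this multinode run, kindnet is recommended a few lines below. Purely to illustrate the file format (this is not the kindnet config the run installs), a minimal bridge conflist would look like:

# Sketch only: the shape of a CNI config CRI-O would pick up from /etc/cni/net.d/.
sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF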
	I0914 00:28:45.052123   43671 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 00:28:45.052141   43671 command_runner.go:130] > [crio.metrics]
	I0914 00:28:45.052151   43671 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 00:28:45.052158   43671 command_runner.go:130] > enable_metrics = true
	I0914 00:28:45.052169   43671 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 00:28:45.052178   43671 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 00:28:45.052191   43671 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0914 00:28:45.052204   43671 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 00:28:45.052216   43671 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 00:28:45.052225   43671 command_runner.go:130] > # metrics_collectors = [
	I0914 00:28:45.052233   43671 command_runner.go:130] > # 	"operations",
	I0914 00:28:45.052243   43671 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 00:28:45.052251   43671 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 00:28:45.052265   43671 command_runner.go:130] > # 	"operations_errors",
	I0914 00:28:45.052274   43671 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 00:28:45.052282   43671 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 00:28:45.052292   43671 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 00:28:45.052301   43671 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 00:28:45.052309   43671 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 00:28:45.052322   43671 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 00:28:45.052332   43671 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 00:28:45.052341   43671 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0914 00:28:45.052353   43671 command_runner.go:130] > # 	"containers_oom_total",
	I0914 00:28:45.052363   43671 command_runner.go:130] > # 	"containers_oom",
	I0914 00:28:45.052373   43671 command_runner.go:130] > # 	"processes_defunct",
	I0914 00:28:45.052380   43671 command_runner.go:130] > # 	"operations_total",
	I0914 00:28:45.052389   43671 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 00:28:45.052397   43671 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 00:28:45.052407   43671 command_runner.go:130] > # 	"operations_errors_total",
	I0914 00:28:45.052418   43671 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 00:28:45.052428   43671 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 00:28:45.052437   43671 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 00:28:45.052446   43671 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 00:28:45.052454   43671 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 00:28:45.052468   43671 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 00:28:45.052478   43671 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0914 00:28:45.052488   43671 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0914 00:28:45.052494   43671 command_runner.go:130] > # ]
	I0914 00:28:45.052505   43671 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 00:28:45.052514   43671 command_runner.go:130] > # metrics_port = 9090
	I0914 00:28:45.052523   43671 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 00:28:45.052533   43671 command_runner.go:130] > # metrics_socket = ""
	I0914 00:28:45.052542   43671 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 00:28:45.052555   43671 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 00:28:45.052569   43671 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 00:28:45.052579   43671 command_runner.go:130] > # certificate on any modification event.
	I0914 00:28:45.052587   43671 command_runner.go:130] > # metrics_cert = ""
	I0914 00:28:45.052598   43671 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 00:28:45.052609   43671 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 00:28:45.052619   43671 command_runner.go:130] > # metrics_key = ""
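enable_metrics = true is set above, with the default metrics_port of 9090 left commented. Assuming that default port, the Prometheus endpoint can be scraped directly on the node:

# Sketch only: scrape CRI-O's Prometheus metrics (assumes the default port 9090).
curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head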
	I0914 00:28:45.052630   43671 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 00:28:45.052639   43671 command_runner.go:130] > [crio.tracing]
	I0914 00:28:45.052649   43671 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 00:28:45.052658   43671 command_runner.go:130] > # enable_tracing = false
	I0914 00:28:45.052668   43671 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 00:28:45.052677   43671 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 00:28:45.052688   43671 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0914 00:28:45.052699   43671 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 00:28:45.052709   43671 command_runner.go:130] > # CRI-O NRI configuration.
	I0914 00:28:45.052715   43671 command_runner.go:130] > [crio.nri]
	I0914 00:28:45.052726   43671 command_runner.go:130] > # Globally enable or disable NRI.
	I0914 00:28:45.052735   43671 command_runner.go:130] > # enable_nri = false
	I0914 00:28:45.052743   43671 command_runner.go:130] > # NRI socket to listen on.
	I0914 00:28:45.052753   43671 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0914 00:28:45.052762   43671 command_runner.go:130] > # NRI plugin directory to use.
	I0914 00:28:45.052771   43671 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0914 00:28:45.052788   43671 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0914 00:28:45.052805   43671 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0914 00:28:45.052818   43671 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0914 00:28:45.052828   43671 command_runner.go:130] > # nri_disable_connections = false
	I0914 00:28:45.052840   43671 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0914 00:28:45.052848   43671 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0914 00:28:45.052860   43671 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0914 00:28:45.052871   43671 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0914 00:28:45.052884   43671 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 00:28:45.052892   43671 command_runner.go:130] > [crio.stats]
	I0914 00:28:45.052902   43671 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 00:28:45.052914   43671 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 00:28:45.052924   43671 command_runner.go:130] > # stats_collection_period = 0
	I0914 00:28:45.052971   43671 command_runner.go:130] ! time="2024-09-14 00:28:45.003150630Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0914 00:28:45.052990   43671 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
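With the configuration written and CRI-O started, the effective runtime state can be checked through the CRI socket; a sketch, again assuming crictl is available on the node:

# Sketch only: query the running CRI-O instance over the CRI socket.
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info | head -n 40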
	I0914 00:28:45.053084   43671 cni.go:84] Creating CNI manager for ""
	I0914 00:28:45.053097   43671 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 00:28:45.053110   43671 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:28:45.053136   43671 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-209237 NodeName:multinode-209237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:28:45.053313   43671 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-209237"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
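	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal sketch, assuming a local copy of that file and the gopkg.in/yaml.v3 package, the multi-document stream can be decoded document by document to confirm each kind:

	    package main

	    import (
	        "fmt"
	        "log"
	        "os"

	        "gopkg.in/yaml.v3"
	    )

	    func main() {
	        // Illustrative local path; point this at the generated kubeadm.yaml.new.
	        f, err := os.Open("kubeadm.yaml")
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer f.Close()

	        dec := yaml.NewDecoder(f)
	        for {
	            var doc struct {
	                APIVersion string `yaml:"apiVersion"`
	                Kind       string `yaml:"kind"`
	            }
	            if err := dec.Decode(&doc); err != nil {
	                break // io.EOF once all four documents have been read
	            }
	            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	        }
	    }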
	I0914 00:28:45.053393   43671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:28:45.064326   43671 command_runner.go:130] > kubeadm
	I0914 00:28:45.064354   43671 command_runner.go:130] > kubectl
	I0914 00:28:45.064360   43671 command_runner.go:130] > kubelet
	I0914 00:28:45.064426   43671 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:28:45.064509   43671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:28:45.074839   43671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 00:28:45.092228   43671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:28:45.109613   43671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0914 00:28:45.125964   43671 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0914 00:28:45.129806   43671 command_runner.go:130] > 192.168.39.214	control-plane.minikube.internal
	I0914 00:28:45.129875   43671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:28:45.276541   43671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:28:45.291617   43671 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237 for IP: 192.168.39.214
	I0914 00:28:45.291644   43671 certs.go:194] generating shared ca certs ...
	I0914 00:28:45.291665   43671 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:28:45.291838   43671 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:28:45.291901   43671 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:28:45.291915   43671 certs.go:256] generating profile certs ...
	I0914 00:28:45.292013   43671 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/client.key
	I0914 00:28:45.292084   43671 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.key.25f22b36
	I0914 00:28:45.292145   43671 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.key
	I0914 00:28:45.292160   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 00:28:45.292190   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 00:28:45.292208   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 00:28:45.292226   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 00:28:45.292244   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 00:28:45.292263   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 00:28:45.292282   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 00:28:45.292307   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 00:28:45.292370   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:28:45.292411   43671 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:28:45.292424   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:28:45.292468   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:28:45.292524   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:28:45.292558   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:28:45.292615   43671 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:28:45.292658   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem -> /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.292677   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.292696   43671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.294635   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:28:45.318825   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:28:45.342423   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:28:45.365761   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:28:45.388625   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 00:28:45.411430   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:28:45.434482   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:28:45.457275   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/multinode-209237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 00:28:45.480224   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:28:45.502320   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:28:45.527973   43671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:28:45.551905   43671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:28:45.567672   43671 ssh_runner.go:195] Run: openssl version
	I0914 00:28:45.573101   43671 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0914 00:28:45.573230   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:28:45.583623   43671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.587665   43671 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.587704   43671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.587748   43671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:28:45.592866   43671 command_runner.go:130] > b5213941
	I0914 00:28:45.593012   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:28:45.601877   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:28:45.612222   43671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.616741   43671 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.616768   43671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.616805   43671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:28:45.622294   43671 command_runner.go:130] > 51391683
	I0914 00:28:45.622347   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:28:45.632068   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:28:45.642541   43671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.647291   43671 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.647324   43671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.647377   43671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:28:45.653359   43671 command_runner.go:130] > 3ec20f2e
	I0914 00:28:45.653442   43671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
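	The three test/ln steps above install minikubeCA.pem, 12602.pem and 126022.pem under /etc/ssl/certs using their OpenSSL subject hashes (b5213941, 51391683, 3ec20f2e). A minimal Go sketch of the same pattern, shelling out to openssl for the hash exactly as the commands above do (paths are illustrative, and writing into /etc/ssl/certs needs root):

	    package main

	    import (
	        "log"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    // linkBySubjectHash mirrors `openssl x509 -hash -noout -in CERT` followed by `ln -fs`.
	    func linkBySubjectHash(certPath, certsDir string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return err
	        }
	        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	        _ = os.Remove(link) // the -f in ln -fs: replace an existing link if present
	        return os.Symlink(certPath, link)
	    }

	    func main() {
	        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	            log.Fatal(err)
	        }
	    }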
	I0914 00:28:45.662934   43671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:28:45.667253   43671 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:28:45.667292   43671 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0914 00:28:45.667300   43671 command_runner.go:130] > Device: 253,1	Inode: 4195880     Links: 1
	I0914 00:28:45.667309   43671 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 00:28:45.667318   43671 command_runner.go:130] > Access: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667326   43671 command_runner.go:130] > Modify: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667333   43671 command_runner.go:130] > Change: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667341   43671 command_runner.go:130] >  Birth: 2024-09-14 00:22:05.946932871 +0000
	I0914 00:28:45.667420   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:28:45.672888   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.673064   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:28:45.678501   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.678586   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:28:45.683817   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.683931   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:28:45.689209   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.689406   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:28:45.694698   43671 command_runner.go:130] > Certificate will not expire
	I0914 00:28:45.694767   43671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 00:28:45.700073   43671 command_runner.go:130] > Certificate will not expire
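	Each `-checkend 86400` call above asks openssl whether the certificate is still valid 24 hours from now. The same check as a short Go sketch using the standard crypto/x509 package (the path is just one of the certificates checked above):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    func main() {
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Equivalent of -checkend 86400: is NotAfter still in the future 24h from now?
	        if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
	            fmt.Println("Certificate will not expire")
	        } else {
	            fmt.Println("Certificate will expire")
	        }
	    }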
	I0914 00:28:45.700154   43671 kubeadm.go:392] StartCluster: {Name:multinode-209237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-209237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:28:45.700256   43671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:28:45.700320   43671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:28:45.735141   43671 command_runner.go:130] > 317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7
	I0914 00:28:45.735170   43671 command_runner.go:130] > 7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c
	I0914 00:28:45.735179   43671 command_runner.go:130] > 8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca
	I0914 00:28:45.735190   43671 command_runner.go:130] > f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6
	I0914 00:28:45.735197   43671 command_runner.go:130] > 374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0
	I0914 00:28:45.735206   43671 command_runner.go:130] > 03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a
	I0914 00:28:45.735216   43671 command_runner.go:130] > cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0
	I0914 00:28:45.735227   43671 command_runner.go:130] > 84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2
	I0914 00:28:45.738589   43671 cri.go:89] found id: "317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7"
	I0914 00:28:45.738610   43671 cri.go:89] found id: "7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c"
	I0914 00:28:45.738614   43671 cri.go:89] found id: "8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca"
	I0914 00:28:45.738617   43671 cri.go:89] found id: "f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6"
	I0914 00:28:45.738619   43671 cri.go:89] found id: "374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0"
	I0914 00:28:45.738622   43671 cri.go:89] found id: "03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a"
	I0914 00:28:45.738625   43671 cri.go:89] found id: "cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0"
	I0914 00:28:45.738627   43671 cri.go:89] found id: "84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2"
	I0914 00:28:45.738629   43671 cri.go:89] found id: ""
	I0914 00:28:45.738671   43671 ssh_runner.go:195] Run: sudo runc list -f json
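	The container IDs parsed above come from the crictl invocation a few lines earlier (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`). A hedged sketch of that listing pattern in Go, run directly on the node instead of through ssh_runner (requires crictl and root, as in the log):

	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	        if err != nil {
	            log.Fatal(err)
	        }
	        // One container ID per line, exactly as echoed back in the log above.
	        for _, id := range strings.Fields(string(out)) {
	            fmt.Println("found id:", id)
	        }
	    }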
	
	
	==> CRI-O <==
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.678075816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5407bf3b-1574-4d53-b102-ae3ed7fff626 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.684986220Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2292fe1a-db2a-44dc-a374-ab9a39b7ddc9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.686287666Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-956wv,Uid:d188d3b8-bd67-4381-be40-70ea7e88d809,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273766843374728,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704711630Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-svdnx,Uid:ff82006d-cb22-4180-9740-454f158c2f25,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726273733136087225,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704707656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&PodSandboxMetadata{Name:kindnet-q25jz,Uid:0b1e5199-8d9b-449c-868c-4c2ae8215936,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733064727958,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704710467Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:53dc5e6a-ac47-4181-9a30-96faeff841b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733062959759,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T00:28:52.704706263Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&PodSandboxMetadata{Name:kube-proxy-b9vxj,Uid:5485377f-3371-44f1-9d25-d4fc9c87e7e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733061034315,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704704769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-209237,Uid:82a0f57f68d2ea01e945d218ac798055,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728263512563,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82a0f57f68d2ea01e945d218ac798055,kubernetes.io/config.seen: 2024-09-14T00:28:47.722098453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&PodSandboxMetadat
a{Name:etcd-multinode-209237,Uid:c92b5a4000ae755fa3f55ca0633d7626,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728250050732,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.214:2379,kubernetes.io/config.hash: c92b5a4000ae755fa3f55ca0633d7626,kubernetes.io/config.seen: 2024-09-14T00:28:47.722103988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-209237,Uid:c35437f7ada12fed26bb13b8e7897ac7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728246048151,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.214:8443,kubernetes.io/config.hash: c35437f7ada12fed26bb13b8e7897ac7,kubernetes.io/config.seen: 2024-09-14T00:28:47.722105664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-209237,Uid:1112e0e9df8e98ef0757c4dbc4c653f9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728245399820,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 1112e0e9df8e98ef0757c4dbc4c653f9,kubernetes.io/config.seen: 2024-09-14T00:28:47.722101945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-956wv,Uid:d188d3b8-bd67-4381-be40-70ea7e88d809,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273407156844213,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:23:26.838708888Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:53dc5e6a-ac47-4181-9a30-96faeff841b7,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1726273352283652717,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T00:22:31.970081393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-svdnx,Uid:ff82006d-cb22-4180-9740-454f158c2f25,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273352279234941,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:22:31.963128483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&PodSandboxMetadata{Name:kindnet-q25jz,Uid:0b1e5199-8d9b-449c-868c-4c2ae8215936,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273340043879986,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:22:18.832390621Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&PodSandboxMetadata{Name:kube-proxy-b9vxj,Uid:5485377f-3371-44f1-9d25-d4fc9c87e7e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273340042926029,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:22:18.837436333Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&PodSandboxMetadata{Name:etcd-multinode-209237,Uid:c92b5a4000ae755fa3f55ca0633d7626,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329123996468,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.214:2379,kubernetes.io/config.hash: c92b5a4000ae755fa3f55ca0633d7626,kubernetes.io/config.seen: 2024-09-14T00:22:08.644841068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e053
9c553a6d05e5c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-209237,Uid:1112e0e9df8e98ef0757c4dbc4c653f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329122911118,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1112e0e9df8e98ef0757c4dbc4c653f9,kubernetes.io/config.seen: 2024-09-14T00:22:08.644848274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-209237,Uid:82a0f57f68d2ea01e945d218ac798055,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329122185059,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82a0f57f68d2ea01e945d218ac798055,kubernetes.io/config.seen: 2024-09-14T00:22:08.644847187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-209237,Uid:c35437f7ada12fed26bb13b8e7897ac7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726273329098201767,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint
: 192.168.39.214:8443,kubernetes.io/config.hash: c35437f7ada12fed26bb13b8e7897ac7,kubernetes.io/config.seen: 2024-09-14T00:22:08.644845619Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2292fe1a-db2a-44dc-a374-ab9a39b7ddc9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.687241393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15422d44-ca13-49b6-a2c9-41090b3ae72f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.687296258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15422d44-ca13-49b6-a2c9-41090b3ae72f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.687659428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15422d44-ca13-49b6-a2c9-41090b3ae72f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.721323673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29fc55ec-a98a-4e9d-8d82-d59f48417c9f name=/runtime.v1.RuntimeService/Version
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.721438434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29fc55ec-a98a-4e9d-8d82-d59f48417c9f name=/runtime.v1.RuntimeService/Version
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.722410145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96cb4ce0-8d1e-4583-a2af-7d132c528651 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.722870923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273975722842689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96cb4ce0-8d1e-4583-a2af-7d132c528651 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.723815420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc325c74-e16e-4d99-9c9a-a187af0fe1a3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.723942233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc325c74-e16e-4d99-9c9a-a187af0fe1a3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.724679813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc325c74-e16e-4d99-9c9a-a187af0fe1a3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.766379962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df9e559b-51de-407c-b2e0-2f9ad13ac339 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.766451909Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df9e559b-51de-407c-b2e0-2f9ad13ac339 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.768025519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5442208-0c57-4c77-acdf-6185b15dfc1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.768432857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273975768409627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5442208-0c57-4c77-acdf-6185b15dfc1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.770344334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5302a99-d76e-4fea-a88e-70826384001e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.770411222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5302a99-d76e-4fea-a88e-70826384001e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.773973437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b761e51b938353e34a10c867241f286df927bd18bd54d12825b74cff37db99,PodSandboxId:ff0de3b34b96de9b30c365726757d87c5475cccd1c4cf2e661a0cb840cad4670,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726273410326585441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b97935c57b90a2fb0a16f1994a5b86101c6de630bf1bcc054ab73c76a4ba08c,PodSandboxId:6c69c0c0a87d561493e2a623c0ada197922ad6a31700a3d6e4b18b30bcad8b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726273352451062888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7,PodSandboxId:74cdd0432b856a8c4e787f040c5ea44d49d6239fc59d40127e70ec50e1e74df3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726273352456580068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca,PodSandboxId:16f47f89e20eb32f88cc7533f225a875833f4c166050261f75d231ee44e70a2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726273340352420787,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6,PodSandboxId:5902218c21491f0ff59abdef1a93f4757890335e823ab640bec0ea0213cfad28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726273340161002965,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25
-d4fc9c87e7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0,PodSandboxId:ea20f7e0244877b7d1f1432d2be039e3c90c38beb41db4a569d68028709d7096,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726273329367996361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57
f68d2ea01e945d218ac798055,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a,PodSandboxId:1f7cefd0b83fb2e4d36a3b7ce051cb007748db02dc7cebe0d967d7b7806a6d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726273329324361418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0,PodSandboxId:720c62f82c6bcc6a8e0f9370ad2963f9e290167a8072308e0539c553a6d05e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726273329321089952,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2,PodSandboxId:5bce7f8fcba8796d5fc229f62bbf4a20ad613be6777f6252e22f214fc6fc0e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726273329234210486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5302a99-d76e-4fea-a88e-70826384001e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.791246435Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d37c3f6-670c-46ce-a44b-24504c8ba65e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.791491388Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-956wv,Uid:d188d3b8-bd67-4381-be40-70ea7e88d809,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273766843374728,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704711630Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-svdnx,Uid:ff82006d-cb22-4180-9740-454f158c2f25,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726273733136087225,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704707656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&PodSandboxMetadata{Name:kindnet-q25jz,Uid:0b1e5199-8d9b-449c-868c-4c2ae8215936,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733064727958,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704710467Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:53dc5e6a-ac47-4181-9a30-96faeff841b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733062959759,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T00:28:52.704706263Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&PodSandboxMetadata{Name:kube-proxy-b9vxj,Uid:5485377f-3371-44f1-9d25-d4fc9c87e7e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273733061034315,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:28:52.704704769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-209237,Uid:82a0f57f68d2ea01e945d218ac798055,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728263512563,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82a0f57f68d2ea01e945d218ac798055,kubernetes.io/config.seen: 2024-09-14T00:28:47.722098453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&PodSandboxMetadat
a{Name:etcd-multinode-209237,Uid:c92b5a4000ae755fa3f55ca0633d7626,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728250050732,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.214:2379,kubernetes.io/config.hash: c92b5a4000ae755fa3f55ca0633d7626,kubernetes.io/config.seen: 2024-09-14T00:28:47.722103988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-209237,Uid:c35437f7ada12fed26bb13b8e7897ac7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728246048151,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.214:8443,kubernetes.io/config.hash: c35437f7ada12fed26bb13b8e7897ac7,kubernetes.io/config.seen: 2024-09-14T00:28:47.722105664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-209237,Uid:1112e0e9df8e98ef0757c4dbc4c653f9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726273728245399820,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 1112e0e9df8e98ef0757c4dbc4c653f9,kubernetes.io/config.seen: 2024-09-14T00:28:47.722101945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0d37c3f6-670c-46ce-a44b-24504c8ba65e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.792371828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c7c6e28-5f29-4c9c-b8b4-61cf0e975ef7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.792442415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c7c6e28-5f29-4c9c-b8b4-61cf0e975ef7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:32:55 multinode-209237 crio[2695]: time="2024-09-14 00:32:55.792717631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eab058a21cc8f410162a112956127dea2e2d93ba9297ca828c0fae8b780b2496,PodSandboxId:7a9ca3cb79a28abdfb60c6c901337c413e229550e48f60d715f950f8e821c07a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726273766969154587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-956wv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d188d3b8-bd67-4381-be40-70ea7e88d809,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d,PodSandboxId:d4da591ab46687bcec20a89368084c66f70e98beb26d7177afa63315021fdd6a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726273733518013045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-q25jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1e5199-8d9b-449c-868c-4c2ae8215936,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd,PodSandboxId:137000f4fa16b11c47c1c9a4528d6ed9e5faf8227225ea483a9d25968f7de673,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726273733390642885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-svdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82006d-cb22-4180-9740-454f158c2f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1,PodSandboxId:189fda91fd94b493cd4faabc298e3db3ce5db1eb9188f55a67abfd324ab3f547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726273733307957858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9vxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5485377f-3371-44f1-9d25-d4fc9c87e7e9,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad99ff59e6f3b3fb2cd5da919ccfbbe60b6c7ca2adb509e26d613bd9081b69,PodSandboxId:19b7ab54ded9a603b3ac0927303ed00ce75b416e06e4d5645987c81ae172a996,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726273733275517447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53dc5e6a-ac47-4181-9a30-96faeff841b7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c,PodSandboxId:183e8e40e69dc01ef8c725abb4f7d05c7420c2482f84521cc50638574b1b4c9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726273728504459953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a0f57f68d2ea01e945d218ac798055,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9,PodSandboxId:78472bed701f138acc676d762cb55707308cbc996659607a2aced77bcbc209f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726273728465379498,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35437f7ada12fed26bb13b8e7897ac7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd,PodSandboxId:f0975fe907998b75003d24b290db093694d0471c30180f60b388a8f8d23f981b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726273728465915849,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92b5a4000ae755fa3f55ca0633d7626,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d,PodSandboxId:7550692aa5380f15efbd6e093533aa83629b35d8dd8df54b35f308e182df24a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726273728419835629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-209237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1112e0e9df8e98ef0757c4dbc4c653f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c7c6e28-5f29-4c9c-b8b4-61cf0e975ef7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eab058a21cc8f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   7a9ca3cb79a28       busybox-7dff88458-956wv
	6031bffda9a2f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   d4da591ab4668       kindnet-q25jz
	cb952776322fc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   137000f4fa16b       coredns-7c65d6cfc9-svdnx
	c693ef0e7b777       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   189fda91fd94b       kube-proxy-b9vxj
	58ad99ff59e6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   19b7ab54ded9a       storage-provisioner
	0ba391e8a5aad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   183e8e40e69dc       kube-controller-manager-multinode-209237
	b72ce42c87cea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   f0975fe907998       etcd-multinode-209237
	81cdf784a468e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   78472bed701f1       kube-apiserver-multinode-209237
	b91527355f6ed       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   7550692aa5380       kube-scheduler-multinode-209237
	67b761e51b938       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   ff0de3b34b96d       busybox-7dff88458-956wv
	317b9e570ba23       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   74cdd0432b856       coredns-7c65d6cfc9-svdnx
	7b97935c57b90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   6c69c0c0a87d5       storage-provisioner
	8e2b4c92c6869       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   16f47f89e20eb       kindnet-q25jz
	f8fe88c904818       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   5902218c21491       kube-proxy-b9vxj
	374870699ff0a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   ea20f7e024487       kube-controller-manager-multinode-209237
	03bcf16a526d9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   1f7cefd0b83fb       etcd-multinode-209237
	cc34260f15554       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   720c62f82c6bc       kube-scheduler-multinode-209237
	84997aaf1d8b5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   5bce7f8fcba87       kube-apiserver-multinode-209237
	
	
	==> coredns [317b9e570ba23b99e754f89ba6df144e9b2c7e959f95012df3dd940979a1dcf7] <==
	[INFO] 10.244.1.2:54390 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640334s
	[INFO] 10.244.1.2:38994 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099861s
	[INFO] 10.244.1.2:58586 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093077s
	[INFO] 10.244.1.2:49292 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236217s
	[INFO] 10.244.1.2:42846 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104296s
	[INFO] 10.244.1.2:54669 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139418s
	[INFO] 10.244.1.2:57229 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009637s
	[INFO] 10.244.0.3:53187 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097746s
	[INFO] 10.244.0.3:43993 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059835s
	[INFO] 10.244.0.3:47338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049723s
	[INFO] 10.244.0.3:55121 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043808s
	[INFO] 10.244.1.2:44308 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129689s
	[INFO] 10.244.1.2:51773 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115579s
	[INFO] 10.244.1.2:59177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081764s
	[INFO] 10.244.1.2:58712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126592s
	[INFO] 10.244.0.3:45372 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181947s
	[INFO] 10.244.0.3:33077 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175651s
	[INFO] 10.244.0.3:55956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153705s
	[INFO] 10.244.0.3:50590 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101787s
	[INFO] 10.244.1.2:45483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162049s
	[INFO] 10.244.1.2:40517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143862s
	[INFO] 10.244.1.2:43378 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084981s
	[INFO] 10.244.1.2:37454 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077679s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb952776322fce28c1023f0759fd80197a2ba6370b11a29de16c36b26afa9fdd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32948 - 37710 "HINFO IN 5518691668570056764.2722245426264500041. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014399104s
	
	
	==> describe nodes <==
	Name:               multinode-209237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=multinode-209237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_22_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209237
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:32:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:28:52 +0000   Sat, 14 Sep 2024 00:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    multinode-209237
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64f2ce3c14ee4a9f95871f538c56db8d
	  System UUID:                64f2ce3c-14ee-4a9f-9587-1f538c56db8d
	  Boot ID:                    16cc41bb-1ddb-422a-b746-d57940c85259
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-956wv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 coredns-7c65d6cfc9-svdnx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-209237                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-q25jz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-209237             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-209237    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b9vxj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-209237             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-209237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-209237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-209237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)    kubelet          Node multinode-209237 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)    kubelet          Node multinode-209237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)    kubelet          Node multinode-209237 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-209237 event: Registered Node multinode-209237 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-209237 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-209237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-209237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-209237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-209237 event: Registered Node multinode-209237 in Controller
	
	
	Name:               multinode-209237-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209237-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=multinode-209237
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T00_29_34_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:29:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209237-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:30:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:31:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:31:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:31:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 00:30:04 +0000   Sat, 14 Sep 2024 00:31:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-209237-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbe3ca9578d2450f8368ddf16293a2eb
	  System UUID:                cbe3ca95-78d2-450f-8368-ddf16293a2eb
	  Boot ID:                    7235c0df-e2b7-4425-ad33-af70beb280f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzw2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-xmgm2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m53s
	  kube-system                 kube-proxy-pddlw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m53s (x2 over 9m53s)  kubelet          Node multinode-209237-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s (x2 over 9m53s)  kubelet          Node multinode-209237-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s (x2 over 9m53s)  kubelet          Node multinode-209237-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m32s                  kubelet          Node multinode-209237-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-209237-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-209237-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-209237-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-209237-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-209237-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061865] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.178940] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.117094] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.275249] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Sep14 00:22] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.079969] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.060877] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002313] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.088513] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.100109] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.136910] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.969449] kauditd_printk_skb: 60 callbacks suppressed
	[Sep14 00:23] kauditd_printk_skb: 12 callbacks suppressed
	[Sep14 00:28] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.158472] systemd-fstab-generator[2632]: Ignoring "noauto" option for root device
	[  +0.170611] systemd-fstab-generator[2646]: Ignoring "noauto" option for root device
	[  +0.141869] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.280593] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.685211] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +2.341461] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +5.641097] kauditd_printk_skb: 184 callbacks suppressed
	[Sep14 00:29] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.084386] systemd-fstab-generator[3744]: Ignoring "noauto" option for root device
	[ +17.919526] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [03bcf16a526d9be6b83c49db3d720fbc9dba55f7800476c0cfa7d731dfa0625a] <==
	{"level":"info","ts":"2024-09-14T00:23:08.556874Z","caller":"traceutil/trace.go:171","msg":"trace[905844528] range","detail":"{range_begin:/registry/minions/multinode-209237-m02; range_end:; response_count:1; response_revision:513; }","duration":"284.418666ms","start":"2024-09-14T00:23:08.272446Z","end":"2024-09-14T00:23:08.556865Z","steps":["trace[905844528] 'agreement among raft nodes before linearized reading'  (duration: 284.27404ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:08.557004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.067912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:23:08.557067Z","caller":"traceutil/trace.go:171","msg":"trace[770311732] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:513; }","duration":"305.127353ms","start":"2024-09-14T00:23:08.251928Z","end":"2024-09-14T00:23:08.557056Z","steps":["trace[770311732] 'agreement among raft nodes before linearized reading'  (duration: 305.053198ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:08.557108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T00:23:08.251893Z","time spent":"305.204326ms","remote":"127.0.0.1:50052","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-14T00:23:08.557514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T00:23:08.220049Z","time spent":"336.75953ms","remote":"127.0.0.1:50280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2878,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-209237-m02\" mod_revision:504 > success:<request_put:<key:\"/registry/minions/multinode-209237-m02\" value_size:2832 >> failure:<request_range:<key:\"/registry/minions/multinode-209237-m02\" > >"}
	{"level":"info","ts":"2024-09-14T00:23:08.878236Z","caller":"traceutil/trace.go:171","msg":"trace[2033956248] linearizableReadLoop","detail":"{readStateIndex:535; appliedIndex:534; }","duration":"153.029492ms","start":"2024-09-14T00:23:08.725184Z","end":"2024-09-14T00:23:08.878214Z","steps":["trace[2033956248] 'read index received'  (duration: 86.753327ms)","trace[2033956248] 'applied index is now lower than readState.Index'  (duration: 66.275121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T00:23:08.878406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.195811ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:23:08.878459Z","caller":"traceutil/trace.go:171","msg":"trace[1875454931] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:513; }","duration":"153.26437ms","start":"2024-09-14T00:23:08.725181Z","end":"2024-09-14T00:23:08.878445Z","steps":["trace[1875454931] 'agreement among raft nodes before linearized reading'  (duration: 153.167522ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:08.878559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.045163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-209237-m02\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-14T00:23:08.878614Z","caller":"traceutil/trace.go:171","msg":"trace[1656493857] range","detail":"{range_begin:/registry/minions/multinode-209237-m02; range_end:; response_count:1; response_revision:513; }","duration":"106.105177ms","start":"2024-09-14T00:23:08.772499Z","end":"2024-09-14T00:23:08.878604Z","steps":["trace[1656493857] 'agreement among raft nodes before linearized reading'  (duration: 106.008888ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:23:57.639574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.852738ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6697857521825737153 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-209237-m03.17f4f48b8c226a41\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-209237-m03.17f4f48b8c226a41\" value_size:642 lease:6697857521825736767 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-14T00:23:57.639679Z","caller":"traceutil/trace.go:171","msg":"trace[1071833621] linearizableReadLoop","detail":"{readStateIndex:648; appliedIndex:647; }","duration":"110.242348ms","start":"2024-09-14T00:23:57.529426Z","end":"2024-09-14T00:23:57.639669Z","steps":["trace[1071833621] 'read index received'  (duration: 25.954µs)","trace[1071833621] 'applied index is now lower than readState.Index'  (duration: 110.21557ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T00:23:57.639738Z","caller":"traceutil/trace.go:171","msg":"trace[867888670] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"232.448573ms","start":"2024-09-14T00:23:57.407284Z","end":"2024-09-14T00:23:57.639733Z","steps":["trace[867888670] 'process raft request'  (duration: 74.105213ms)","trace[867888670] 'compare'  (duration: 157.748687ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T00:23:57.640067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.643142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-209237-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:23:57.640151Z","caller":"traceutil/trace.go:171","msg":"trace[1037065280] range","detail":"{range_begin:/registry/minions/multinode-209237-m03; range_end:; response_count:0; response_revision:616; }","duration":"110.733954ms","start":"2024-09-14T00:23:57.529408Z","end":"2024-09-14T00:23:57.640142Z","steps":["trace[1037065280] 'agreement among raft nodes before linearized reading'  (duration: 110.605078ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:27:12.424373Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T00:27:12.424494Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-209237","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"]}
	{"level":"warn","ts":"2024-09-14T00:27:12.424624Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:27:12.424712Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:27:12.509046Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.214:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:27:12.509139Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.214:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T00:27:12.509231Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9910392473c15cf3","current-leader-member-id":"9910392473c15cf3"}
	{"level":"info","ts":"2024-09-14T00:27:12.511476Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:27:12.511615Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:27:12.511646Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-209237","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"]}
	
	
	==> etcd [b72ce42c87ceaea1607da2d786416d3a9291e3dac1e8f91c153b6116ef4107fd] <==
	{"level":"info","ts":"2024-09-14T00:28:48.889540Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"437e955a662fe33","local-member-id":"9910392473c15cf3","added-peer-id":"9910392473c15cf3","added-peer-peer-urls":["https://192.168.39.214:2380"]}
	{"level":"info","ts":"2024-09-14T00:28:48.889668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"437e955a662fe33","local-member-id":"9910392473c15cf3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:28:48.889721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:28:48.897864Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:28:48.899482Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T00:28:48.899695Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9910392473c15cf3","initial-advertise-peer-urls":["https://192.168.39.214:2380"],"listen-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:28:48.899731Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:28:48.909273Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:28:48.913809Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-14T00:28:50.715391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T00:28:50.715472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:28:50.715522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgPreVoteResp from 9910392473c15cf3 at term 2"}
	{"level":"info","ts":"2024-09-14T00:28:50.715542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.715554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgVoteResp from 9910392473c15cf3 at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.715565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.715587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9910392473c15cf3 elected leader 9910392473c15cf3 at term 3"}
	{"level":"info","ts":"2024-09-14T00:28:50.721190Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:28:50.721371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:28:50.721212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9910392473c15cf3","local-member-attributes":"{Name:multinode-209237 ClientURLs:[https://192.168.39.214:2379]}","request-path":"/0/members/9910392473c15cf3/attributes","cluster-id":"437e955a662fe33","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:28:50.722244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:28:50.722873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:28:50.722979Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:28:50.723436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:28:50.723831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:28:50.724251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.214:2379"}
	
	
	==> kernel <==
	 00:32:56 up 11 min,  0 users,  load average: 0.13, 0.19, 0.12
	Linux multinode-209237 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6031bffda9a2f85539f236b414c65d99b378867ffec33f1f356e73794f5bf32d] <==
	I0914 00:31:54.419009       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:32:04.419583       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:32:04.419726       1 main.go:299] handling current node
	I0914 00:32:04.419797       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:32:04.419837       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:32:14.423437       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:32:14.423570       1 main.go:299] handling current node
	I0914 00:32:14.423608       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:32:14.423626       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:32:24.418603       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:32:24.418709       1 main.go:299] handling current node
	I0914 00:32:24.418739       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:32:24.418813       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:32:34.425643       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:32:34.425719       1 main.go:299] handling current node
	I0914 00:32:34.425742       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:32:34.425748       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:32:44.426416       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:32:44.426621       1 main.go:299] handling current node
	I0914 00:32:44.426670       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:32:44.426694       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:32:54.418904       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:32:54.419057       1 main.go:299] handling current node
	I0914 00:32:54.419105       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:32:54.419129       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [8e2b4c92c68697c6d4617db9d7d7726bc8d8278cade0d086461777b5fe3960ca] <==
	I0914 00:26:31.417395       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:26:41.424683       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:26:41.424813       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:26:41.425005       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:26:41.425027       1 main.go:299] handling current node
	I0914 00:26:41.425047       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:26:41.425052       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:26:51.416060       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:26:51.416192       1 main.go:299] handling current node
	I0914 00:26:51.416245       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:26:51.416256       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:26:51.416430       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:26:51.416451       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:27:01.423184       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:27:01.423289       1 main.go:299] handling current node
	I0914 00:27:01.423321       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:27:01.423340       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:27:01.423543       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:27:01.423846       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	I0914 00:27:11.424861       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0914 00:27:11.424903       1 main.go:299] handling current node
	I0914 00:27:11.424918       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0914 00:27:11.424958       1 main.go:322] Node multinode-209237-m02 has CIDR [10.244.1.0/24] 
	I0914 00:27:11.425078       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0914 00:27:11.425100       1 main.go:322] Node multinode-209237-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [81cdf784a468ec44d62227423e6a3942257bf37a0db8bd9692d61c4efa1681f9] <==
	I0914 00:28:52.005077       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 00:28:52.005367       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 00:28:52.006573       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 00:28:52.006625       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 00:28:52.011153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 00:28:52.011985       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:28:52.014815       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 00:28:52.021942       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 00:28:52.022049       1 aggregator.go:171] initial CRD sync complete...
	I0914 00:28:52.022129       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 00:28:52.022153       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 00:28:52.022175       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:28:52.023035       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 00:28:52.044381       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 00:28:52.067233       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:28:52.067343       1 policy_source.go:224] refreshing policies
	I0914 00:28:52.099907       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:28:52.922100       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 00:28:54.333721       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:28:54.459663       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:28:54.473697       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:28:54.535971       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:28:54.543169       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 00:28:55.628531       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 00:28:55.680673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [84997aaf1d8b511440fe99c5455efa7ea353394904fc9de8506902f0b9528cb2] <==
	W0914 00:27:12.442885       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.442937       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.442970       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443023       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443063       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443094       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443126       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.443172       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0914 00:27:12.444743       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc009795e58)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	W0914 00:27:12.450485       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450528       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450558       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450600       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450638       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450664       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450690       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450718       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.450744       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.452615       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.452939       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453031       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453119       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453535       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:27:12.453570       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0914 00:27:12.454344       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [0ba391e8a5aad2bd9f5f04f3d11b4a95fa7de75c1f485b7a44e440808aa14e6c] <==
	I0914 00:30:11.092124       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-209237-m03" podCIDRs=["10.244.2.0/24"]
	I0914 00:30:11.092163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.092322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.101312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.486146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:11.804905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:15.425421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:21.458844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:29.313074       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:30:29.313300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:29.325999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:30.361655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:33.859331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:33.875663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:34.427416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:30:34.427629       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:31:15.315384       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6kdl5"
	I0914 00:31:15.349028       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6kdl5"
	I0914 00:31:15.349893       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-96zdq"
	I0914 00:31:15.382296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:31:15.391611       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-96zdq"
	I0914 00:31:15.406346       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:31:15.416835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.251898ms"
	I0914 00:31:15.416912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.484µs"
	I0914 00:31:20.542701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	
	
	==> kube-controller-manager [374870699ff0a782685f3264d53a9cde8bb2ea98fb78040ef81e4de8efc40ae0] <==
	I0914 00:24:47.573210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:24:47.593971       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-209237-m03" podCIDRs=["10.244.3.0/24"]
	I0914 00:24:47.594011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	E0914 00:24:47.604080       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-209237-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-209237-m03" podCIDRs=["10.244.4.0/24"]
	E0914 00:24:47.604187       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-209237-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-209237-m03"
	E0914 00:24:47.604266       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-209237-m03': failed to patch node CIDR: Node \"multinode-209237-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0914 00:24:47.604302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:47.609386       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:47.828648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:48.148822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:48.322997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:24:57.966997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:07.138915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:07.139540       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:25:07.147200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:08.256628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:48.273094       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:48.275108       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-209237-m02"
	I0914 00:25:48.277432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:25:48.304736       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:25:48.305290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	I0914 00:25:48.354466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.846122ms"
	I0914 00:25:48.354640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.506µs"
	I0914 00:25:53.437182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m03"
	I0914 00:26:03.517707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-209237-m02"
	
	
	==> kube-proxy [c693ef0e7b7777893182f46438bb5047a506acb50fbaae1d63dfcf8fca9ed6d1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:28:53.649249       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:28:53.660234       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	E0914 00:28:53.660512       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:28:53.691905       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:28:53.691942       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:28:53.691972       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:28:53.694191       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:28:53.694515       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:28:53.694556       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:28:53.696216       1 config.go:199] "Starting service config controller"
	I0914 00:28:53.696292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:28:53.696335       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:28:53.696391       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:28:53.697056       1 config.go:328] "Starting node config controller"
	I0914 00:28:53.697658       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:28:53.797202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:28:53.797256       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:28:53.799075       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f8fe88c9048189a1976c2f59ed33da7b9533bdae402eac3ae6b3a096569666f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:22:20.332273       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:22:20.354364       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	E0914 00:22:20.354502       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:22:20.412851       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:22:20.412889       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:22:20.412917       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:22:20.416683       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:22:20.420905       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:22:20.421013       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:22:20.423294       1 config.go:199] "Starting service config controller"
	I0914 00:22:20.423364       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:22:20.423408       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:22:20.423424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:22:20.424204       1 config.go:328] "Starting node config controller"
	I0914 00:22:20.425678       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:22:20.524387       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:22:20.524410       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:22:20.525851       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b91527355f6edb242e1a8a38d7019f2dda2b86d619a83f2be111c76d44669d4d] <==
	I0914 00:28:49.464036       1 serving.go:386] Generated self-signed cert in-memory
	W0914 00:28:51.960391       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 00:28:51.960551       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 00:28:51.960588       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 00:28:51.960619       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 00:28:52.026428       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 00:28:52.026734       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:28:52.029117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 00:28:52.029201       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 00:28:52.029989       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 00:28:52.030106       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 00:28:52.129688       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cc34260f155542e6694d882d220672df396a4a1a2285b1d7072d137f17bab7f0] <==
	E0914 00:22:12.402902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.455395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:22:12.455440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.484728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:22:12.484875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.531688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:12.531987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.595571       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:22:12.595620       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 00:22:12.599173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:22:12.599218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.626030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:12.626082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.638384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:12.638434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.641024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:22:12.641070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.750649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:22:12.750702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.764088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 00:22:12.764137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:12.879497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 00:22:12.879565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0914 00:22:15.085465       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 00:27:12.438131       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 14 00:31:37 multinode-209237 kubelet[2907]: E0914 00:31:37.837672    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273897837021010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:31:47 multinode-209237 kubelet[2907]: E0914 00:31:47.801345    2907 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:31:47 multinode-209237 kubelet[2907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:31:47 multinode-209237 kubelet[2907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:31:47 multinode-209237 kubelet[2907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:31:47 multinode-209237 kubelet[2907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:31:47 multinode-209237 kubelet[2907]: E0914 00:31:47.838896    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273907838640428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:31:47 multinode-209237 kubelet[2907]: E0914 00:31:47.838931    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273907838640428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:31:57 multinode-209237 kubelet[2907]: E0914 00:31:57.841455    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273917841104281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:31:57 multinode-209237 kubelet[2907]: E0914 00:31:57.841510    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273917841104281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:07 multinode-209237 kubelet[2907]: E0914 00:32:07.842795    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273927842265427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:07 multinode-209237 kubelet[2907]: E0914 00:32:07.842825    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273927842265427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:17 multinode-209237 kubelet[2907]: E0914 00:32:17.845512    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273937844593038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:17 multinode-209237 kubelet[2907]: E0914 00:32:17.845540    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273937844593038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:27 multinode-209237 kubelet[2907]: E0914 00:32:27.848245    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273947847840127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:27 multinode-209237 kubelet[2907]: E0914 00:32:27.848702    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273947847840127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:37 multinode-209237 kubelet[2907]: E0914 00:32:37.851495    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273957850611064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:37 multinode-209237 kubelet[2907]: E0914 00:32:37.852106    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273957850611064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:47 multinode-209237 kubelet[2907]: E0914 00:32:47.802134    2907 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:32:47 multinode-209237 kubelet[2907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:32:47 multinode-209237 kubelet[2907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:32:47 multinode-209237 kubelet[2907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:32:47 multinode-209237 kubelet[2907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:32:47 multinode-209237 kubelet[2907]: E0914 00:32:47.853315    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273967853068737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:32:47 multinode-209237 kubelet[2907]: E0914 00:32:47.853336    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726273967853068737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:32:55.325100   45583 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19640-5422/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-209237 -n multinode-209237
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-209237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.36s)

                                                
                                    
TestPreload (203.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-847638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0914 00:37:20.623920   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-847638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.611086874s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-847638 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-847638 image pull gcr.io/k8s-minikube/busybox: (3.403858782s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-847638
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-847638: (6.573825403s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-847638 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0914 00:39:14.607949   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:31.535006   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-847638 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.516983007s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-847638 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-14 00:40:08.663325166 +0000 UTC m=+4423.090970369
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-847638 -n test-preload-847638
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-847638 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-847638 logs -n 25: (1.064111927s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237 sudo cat                                       | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m03_multinode-209237.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt                       | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m02:/home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n                                                                 | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | multinode-209237-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-209237 ssh -n multinode-209237-m02 sudo cat                                   | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-209237 node stop m03                                                          | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:24 UTC |
	| node    | multinode-209237 node start                                                             | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:24 UTC | 14 Sep 24 00:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:25 UTC |                     |
	| stop    | -p multinode-209237                                                                     | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:25 UTC |                     |
	| start   | -p multinode-209237                                                                     | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:27 UTC | 14 Sep 24 00:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC |                     |
	| node    | multinode-209237 node delete                                                            | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC | 14 Sep 24 00:30 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-209237 stop                                                                   | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:30 UTC |                     |
	| start   | -p multinode-209237                                                                     | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:32 UTC | 14 Sep 24 00:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-209237                                                                | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC |                     |
	| start   | -p multinode-209237-m02                                                                 | multinode-209237-m02 | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-209237-m03                                                                 | multinode-209237-m03 | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC | 14 Sep 24 00:36 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-209237                                                                 | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC |                     |
	| delete  | -p multinode-209237-m03                                                                 | multinode-209237-m03 | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC | 14 Sep 24 00:36 UTC |
	| delete  | -p multinode-209237                                                                     | multinode-209237     | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC | 14 Sep 24 00:36 UTC |
	| start   | -p test-preload-847638                                                                  | test-preload-847638  | jenkins | v1.34.0 | 14 Sep 24 00:36 UTC | 14 Sep 24 00:38 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-847638 image pull                                                          | test-preload-847638  | jenkins | v1.34.0 | 14 Sep 24 00:38 UTC | 14 Sep 24 00:38 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-847638                                                                  | test-preload-847638  | jenkins | v1.34.0 | 14 Sep 24 00:38 UTC | 14 Sep 24 00:39 UTC |
	| start   | -p test-preload-847638                                                                  | test-preload-847638  | jenkins | v1.34.0 | 14 Sep 24 00:39 UTC | 14 Sep 24 00:40 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-847638 image list                                                          | test-preload-847638  | jenkins | v1.34.0 | 14 Sep 24 00:40 UTC | 14 Sep 24 00:40 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:39:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:39:03.973099   48055 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:39:03.973336   48055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:03.973344   48055 out.go:358] Setting ErrFile to fd 2...
	I0914 00:39:03.973348   48055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:03.973534   48055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:39:03.974063   48055 out.go:352] Setting JSON to false
	I0914 00:39:03.974987   48055 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4890,"bootTime":1726269454,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:39:03.975077   48055 start.go:139] virtualization: kvm guest
	I0914 00:39:03.977388   48055 out.go:177] * [test-preload-847638] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:39:03.979304   48055 notify.go:220] Checking for updates...
	I0914 00:39:03.979319   48055 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:39:03.980902   48055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:39:03.982448   48055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:39:03.984110   48055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:39:03.985650   48055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:39:03.987006   48055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:39:03.988647   48055 config.go:182] Loaded profile config "test-preload-847638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0914 00:39:03.989055   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:03.989107   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:04.003934   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0914 00:39:04.004452   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:04.005042   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:04.005061   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:04.005368   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:04.005544   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:04.007373   48055 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 00:39:04.008716   48055 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:39:04.009032   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:04.009068   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:04.023799   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0914 00:39:04.024300   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:04.024783   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:04.024805   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:04.025086   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:04.025274   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:04.061096   48055 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:39:04.062370   48055 start.go:297] selected driver: kvm2
	I0914 00:39:04.062391   48055 start.go:901] validating driver "kvm2" against &{Name:test-preload-847638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-847638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:39:04.062519   48055 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:39:04.063253   48055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:39:04.063344   48055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:39:04.078889   48055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:39:04.079296   48055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:39:04.079332   48055 cni.go:84] Creating CNI manager for ""
	I0914 00:39:04.079382   48055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:39:04.079454   48055 start.go:340] cluster config:
	{Name:test-preload-847638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-847638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:39:04.079590   48055 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:39:04.081582   48055 out.go:177] * Starting "test-preload-847638" primary control-plane node in "test-preload-847638" cluster
	I0914 00:39:04.082861   48055 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0914 00:39:04.551738   48055 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0914 00:39:04.551816   48055 cache.go:56] Caching tarball of preloaded images
	I0914 00:39:04.551999   48055 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0914 00:39:04.553921   48055 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0914 00:39:04.555660   48055 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0914 00:39:04.654414   48055 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0914 00:39:15.957541   48055 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0914 00:39:15.957639   48055 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0914 00:39:16.799131   48055 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0914 00:39:16.799260   48055 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/config.json ...
	I0914 00:39:16.799502   48055 start.go:360] acquireMachinesLock for test-preload-847638: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:39:16.799563   48055 start.go:364] duration metric: took 41.789µs to acquireMachinesLock for "test-preload-847638"
	I0914 00:39:16.799579   48055 start.go:96] Skipping create...Using existing machine configuration
	I0914 00:39:16.799584   48055 fix.go:54] fixHost starting: 
	I0914 00:39:16.799871   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:16.799905   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:16.814612   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0914 00:39:16.815046   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:16.815563   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:16.815588   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:16.815966   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:16.816239   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:16.816384   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetState
	I0914 00:39:16.818039   48055 fix.go:112] recreateIfNeeded on test-preload-847638: state=Stopped err=<nil>
	I0914 00:39:16.818073   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	W0914 00:39:16.818209   48055 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 00:39:16.820586   48055 out.go:177] * Restarting existing kvm2 VM for "test-preload-847638" ...
	I0914 00:39:16.821909   48055 main.go:141] libmachine: (test-preload-847638) Calling .Start
	I0914 00:39:16.822108   48055 main.go:141] libmachine: (test-preload-847638) Ensuring networks are active...
	I0914 00:39:16.822776   48055 main.go:141] libmachine: (test-preload-847638) Ensuring network default is active
	I0914 00:39:16.823233   48055 main.go:141] libmachine: (test-preload-847638) Ensuring network mk-test-preload-847638 is active
	I0914 00:39:16.823810   48055 main.go:141] libmachine: (test-preload-847638) Getting domain xml...
	I0914 00:39:16.824746   48055 main.go:141] libmachine: (test-preload-847638) Creating domain...
	I0914 00:39:18.030789   48055 main.go:141] libmachine: (test-preload-847638) Waiting to get IP...
	I0914 00:39:18.031552   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:18.031935   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:18.032041   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:18.031923   48122 retry.go:31] will retry after 259.866122ms: waiting for machine to come up
	I0914 00:39:18.293606   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:18.294050   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:18.294076   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:18.294018   48122 retry.go:31] will retry after 264.653177ms: waiting for machine to come up
	I0914 00:39:18.560587   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:18.560974   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:18.561003   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:18.560925   48122 retry.go:31] will retry after 328.854466ms: waiting for machine to come up
	I0914 00:39:18.891570   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:18.892020   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:18.892049   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:18.891972   48122 retry.go:31] will retry after 412.545617ms: waiting for machine to come up
	I0914 00:39:19.306784   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:19.307314   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:19.307348   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:19.307268   48122 retry.go:31] will retry after 630.996467ms: waiting for machine to come up
	I0914 00:39:19.940415   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:19.941020   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:19.941044   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:19.940984   48122 retry.go:31] will retry after 947.143319ms: waiting for machine to come up
	I0914 00:39:20.890165   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:20.890572   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:20.890625   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:20.890527   48122 retry.go:31] will retry after 876.455137ms: waiting for machine to come up
	I0914 00:39:21.768974   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:21.769451   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:21.769487   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:21.769383   48122 retry.go:31] will retry after 1.05643323s: waiting for machine to come up
	I0914 00:39:22.827082   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:22.827534   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:22.827564   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:22.827498   48122 retry.go:31] will retry after 1.700804941s: waiting for machine to come up
	I0914 00:39:24.530402   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:24.530904   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:24.530930   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:24.530858   48122 retry.go:31] will retry after 1.84589486s: waiting for machine to come up
	I0914 00:39:26.378850   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:26.379173   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:26.379200   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:26.379123   48122 retry.go:31] will retry after 2.896303187s: waiting for machine to come up
	I0914 00:39:29.278815   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:29.279288   48055 main.go:141] libmachine: (test-preload-847638) DBG | unable to find current IP address of domain test-preload-847638 in network mk-test-preload-847638
	I0914 00:39:29.279323   48055 main.go:141] libmachine: (test-preload-847638) DBG | I0914 00:39:29.279239   48122 retry.go:31] will retry after 3.635763745s: waiting for machine to come up
	I0914 00:39:32.916278   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:32.916754   48055 main.go:141] libmachine: (test-preload-847638) Found IP for machine: 192.168.39.203
	I0914 00:39:32.916776   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has current primary IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:32.916784   48055 main.go:141] libmachine: (test-preload-847638) Reserving static IP address...
	I0914 00:39:32.917274   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "test-preload-847638", mac: "52:54:00:19:fd:33", ip: "192.168.39.203"} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:32.917301   48055 main.go:141] libmachine: (test-preload-847638) DBG | skip adding static IP to network mk-test-preload-847638 - found existing host DHCP lease matching {name: "test-preload-847638", mac: "52:54:00:19:fd:33", ip: "192.168.39.203"}
	I0914 00:39:32.917317   48055 main.go:141] libmachine: (test-preload-847638) DBG | Getting to WaitForSSH function...
	I0914 00:39:32.917323   48055 main.go:141] libmachine: (test-preload-847638) Reserved static IP address: 192.168.39.203
	I0914 00:39:32.917332   48055 main.go:141] libmachine: (test-preload-847638) Waiting for SSH to be available...
	I0914 00:39:32.919552   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:32.919884   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:32.919911   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:32.920059   48055 main.go:141] libmachine: (test-preload-847638) DBG | Using SSH client type: external
	I0914 00:39:32.920088   48055 main.go:141] libmachine: (test-preload-847638) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa (-rw-------)
	I0914 00:39:32.920126   48055 main.go:141] libmachine: (test-preload-847638) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 00:39:32.920176   48055 main.go:141] libmachine: (test-preload-847638) DBG | About to run SSH command:
	I0914 00:39:32.920189   48055 main.go:141] libmachine: (test-preload-847638) DBG | exit 0
	I0914 00:39:33.043739   48055 main.go:141] libmachine: (test-preload-847638) DBG | SSH cmd err, output: <nil>: 
	I0914 00:39:33.044083   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetConfigRaw
	I0914 00:39:33.044751   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetIP
	I0914 00:39:33.047161   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.047475   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.047515   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.047765   48055 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/config.json ...
	I0914 00:39:33.048003   48055 machine.go:93] provisionDockerMachine start ...
	I0914 00:39:33.048023   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:33.048235   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.050705   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.051023   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.051051   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.051168   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:33.051374   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.051568   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.051691   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:33.051869   48055 main.go:141] libmachine: Using SSH client type: native
	I0914 00:39:33.052088   48055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0914 00:39:33.052100   48055 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:39:33.159825   48055 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 00:39:33.159851   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetMachineName
	I0914 00:39:33.160093   48055 buildroot.go:166] provisioning hostname "test-preload-847638"
	I0914 00:39:33.160126   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetMachineName
	I0914 00:39:33.160388   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.163014   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.163454   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.163496   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.163654   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:33.163844   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.163992   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.164116   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:33.164273   48055 main.go:141] libmachine: Using SSH client type: native
	I0914 00:39:33.164440   48055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0914 00:39:33.164455   48055 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-847638 && echo "test-preload-847638" | sudo tee /etc/hostname
	I0914 00:39:33.285325   48055 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-847638
	
	I0914 00:39:33.285352   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.287917   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.288194   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.288222   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.288435   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:33.288633   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.288794   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.288895   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:33.289031   48055 main.go:141] libmachine: Using SSH client type: native
	I0914 00:39:33.289238   48055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0914 00:39:33.289257   48055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-847638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-847638/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-847638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:39:33.404231   48055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:39:33.404271   48055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:39:33.404305   48055 buildroot.go:174] setting up certificates
	I0914 00:39:33.404338   48055 provision.go:84] configureAuth start
	I0914 00:39:33.404352   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetMachineName
	I0914 00:39:33.404626   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetIP
	I0914 00:39:33.407550   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.407939   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.407967   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.408111   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.410061   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.410364   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.410384   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.410542   48055 provision.go:143] copyHostCerts
	I0914 00:39:33.410594   48055 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:39:33.410604   48055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:39:33.410666   48055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:39:33.410754   48055 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:39:33.410762   48055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:39:33.410785   48055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:39:33.410836   48055 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:39:33.410842   48055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:39:33.410864   48055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:39:33.410910   48055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.test-preload-847638 san=[127.0.0.1 192.168.39.203 localhost minikube test-preload-847638]
	I0914 00:39:33.508699   48055 provision.go:177] copyRemoteCerts
	I0914 00:39:33.508763   48055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:39:33.508783   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.511101   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.511355   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.511387   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.511537   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:33.511741   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.511889   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:33.512058   48055 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa Username:docker}
	I0914 00:39:33.593457   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:39:33.618335   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 00:39:33.641703   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 00:39:33.665223   48055 provision.go:87] duration metric: took 260.869626ms to configureAuth
	I0914 00:39:33.665256   48055 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:39:33.665434   48055 config.go:182] Loaded profile config "test-preload-847638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0914 00:39:33.665506   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.668216   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.668539   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.668582   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.668835   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:33.669030   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.669227   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.669387   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:33.669550   48055 main.go:141] libmachine: Using SSH client type: native
	I0914 00:39:33.669706   48055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0914 00:39:33.669720   48055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:39:33.887467   48055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:39:33.887489   48055 machine.go:96] duration metric: took 839.470993ms to provisionDockerMachine
	I0914 00:39:33.887507   48055 start.go:293] postStartSetup for "test-preload-847638" (driver="kvm2")
	I0914 00:39:33.887519   48055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:39:33.887536   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:33.887826   48055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:39:33.887849   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:33.890304   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.890630   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:33.890660   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:33.890787   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:33.890970   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:33.891094   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:33.891239   48055 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa Username:docker}
	I0914 00:39:33.973980   48055 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:39:33.977955   48055 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:39:33.977980   48055 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:39:33.978089   48055 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:39:33.978191   48055 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:39:33.978324   48055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:39:33.986997   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:39:34.009301   48055 start.go:296] duration metric: took 121.780854ms for postStartSetup
	I0914 00:39:34.009338   48055 fix.go:56] duration metric: took 17.209753903s for fixHost
	I0914 00:39:34.009360   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:34.011711   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.012004   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:34.012029   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.012142   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:34.012379   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:34.012548   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:34.012699   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:34.012854   48055 main.go:141] libmachine: Using SSH client type: native
	I0914 00:39:34.013044   48055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0914 00:39:34.013056   48055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:39:34.120371   48055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726274374.094733770
	
	I0914 00:39:34.120396   48055 fix.go:216] guest clock: 1726274374.094733770
	I0914 00:39:34.120406   48055 fix.go:229] Guest: 2024-09-14 00:39:34.09473377 +0000 UTC Remote: 2024-09-14 00:39:34.009342819 +0000 UTC m=+30.071209911 (delta=85.390951ms)
	I0914 00:39:34.120428   48055 fix.go:200] guest clock delta is within tolerance: 85.390951ms
	I0914 00:39:34.120434   48055 start.go:83] releasing machines lock for "test-preload-847638", held for 17.320859816s
	I0914 00:39:34.120452   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:34.120771   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetIP
	I0914 00:39:34.123387   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.123764   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:34.123808   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.123973   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:34.124516   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:34.124681   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:34.124820   48055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:39:34.124868   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:34.124883   48055 ssh_runner.go:195] Run: cat /version.json
	I0914 00:39:34.124899   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:34.127530   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.127610   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.127911   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:34.127940   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.127979   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:34.128003   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:34.128068   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:34.128235   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:34.128254   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:34.128388   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:34.128398   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:34.128539   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:34.128539   48055 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa Username:docker}
	I0914 00:39:34.128693   48055 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa Username:docker}
	I0914 00:39:34.235711   48055 ssh_runner.go:195] Run: systemctl --version
	I0914 00:39:34.241571   48055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:39:34.386109   48055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:39:34.392335   48055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:39:34.392395   48055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:39:34.407716   48055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 00:39:34.407740   48055 start.go:495] detecting cgroup driver to use...
	I0914 00:39:34.407810   48055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:39:34.425149   48055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:39:34.438532   48055 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:39:34.438601   48055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:39:34.451539   48055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:39:34.464416   48055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:39:34.577303   48055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:39:34.717607   48055 docker.go:233] disabling docker service ...
	I0914 00:39:34.717662   48055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:39:34.731652   48055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:39:34.744169   48055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:39:34.880578   48055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:39:34.999016   48055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
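Note: the block above stops, disables and masks the cri-docker and docker units so that cri-o is the only runtime answering on the CRI socket. The sketch below is a minimal local illustration of that same systemctl sequence, assuming systemd and sudo are available; it is not minikube's actual implementation, which runs these commands over SSH.

    // disable_docker_sketch.go - illustrative sketch of the service shuffle above:
    // stop, disable and mask docker/cri-docker so only cri-o serves the CRI socket.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func run(args ...string) {
    	// Each call mirrors one "sudo systemctl ..." line from the log.
    	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    		// Best-effort: a unit may not exist on a given image, so just log.
    		log.Printf("%v: %v\n%s", args, err, out)
    	}
    }

    func main() {
    	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
    		run("systemctl", "stop", "-f", unit)
    	}
    	run("systemctl", "disable", "cri-docker.socket")
    	run("systemctl", "mask", "cri-docker.service")
    	run("systemctl", "disable", "docker.socket")
    	run("systemctl", "mask", "docker.service")
    }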
	I0914 00:39:35.013503   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:39:35.031804   48055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0914 00:39:35.031875   48055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.041666   48055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:39:35.041751   48055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.053935   48055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.064470   48055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.075029   48055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:39:35.086377   48055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.096965   48055 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.113770   48055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:39:35.123755   48055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:39:35.133045   48055 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 00:39:35.133109   48055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 00:39:35.146100   48055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:39:35.155428   48055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:39:35.258571   48055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:39:35.350470   48055 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:39:35.350538   48055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:39:35.355176   48055 start.go:563] Will wait 60s for crictl version
	I0914 00:39:35.355246   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:35.359216   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:39:35.404342   48055 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:39:35.404430   48055 ssh_runner.go:195] Run: crio --version
	I0914 00:39:35.431464   48055 ssh_runner.go:195] Run: crio --version
	I0914 00:39:35.459884   48055 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
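Note: the crio.go lines above rewrite the pause image and cgroup manager in the cri-o drop-in with sed before restarting the runtime. The following is a minimal local sketch of the same rewrite done in Go, assuming a hypothetical config path under /tmp; the real run edits /etc/crio/crio.conf.d/02-crio.conf remotely.

    // pause_sketch.go - rewrite pause_image and cgroup_manager lines in a
    // cri-o drop-in, mirroring the sed expressions shown in the log.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const confPath = "/tmp/02-crio.conf" // hypothetical stand-in path

    	data, err := os.ReadFile(confPath)
    	if err != nil {
    		log.Fatalf("read %s: %v", confPath, err)
    	}

    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.7"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

    	if err := os.WriteFile(confPath, out, 0o644); err != nil {
    		log.Fatalf("write %s: %v", confPath, err)
    	}
    }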
	I0914 00:39:35.460896   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetIP
	I0914 00:39:35.463427   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:35.463753   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:35.463770   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:35.464031   48055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 00:39:35.467981   48055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
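Note: the /bin/bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends a fresh one. A rough Go equivalent is sketched below; it writes to /tmp/hosts.new rather than /etc/hosts so it can run unprivileged, and the IP/hostname are taken from the log.

    // hosts_sketch.go - drop any existing host.minikube.internal line and
    // append a fresh entry, like the grep -v / echo pipeline in the log.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.1\thost.minikube.internal"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatalf("read /etc/hosts: %v", err)
    	}

    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // same filter as the grep -v in the log
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)

    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatalf("write /tmp/hosts.new: %v", err)
    	}
    	log.Println("wrote /tmp/hosts.new; the real run copies it over /etc/hosts with sudo")
    }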
	I0914 00:39:35.480550   48055 kubeadm.go:883] updating cluster {Name:test-preload-847638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-847638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:39:35.480651   48055 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0914 00:39:35.480692   48055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:39:35.516224   48055 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0914 00:39:35.516288   48055 ssh_runner.go:195] Run: which lz4
	I0914 00:39:35.520062   48055 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 00:39:35.523732   48055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 00:39:35.523765   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0914 00:39:36.981253   48055 crio.go:462] duration metric: took 1.461229594s to copy over tarball
	I0914 00:39:36.981330   48055 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 00:39:39.317103   48055 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.335739565s)
	I0914 00:39:39.317136   48055 crio.go:469] duration metric: took 2.335854183s to extract the tarball
	I0914 00:39:39.317146   48055 ssh_runner.go:146] rm: /preloaded.tar.lz4
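Note: since no preloaded images were found, the ~459 MB preload tarball is copied to the node and unpacked with tar under /var, then removed. The sketch below shows the same extraction step run locally with os/exec, assuming tar and lz4 are installed; the path matches the one in the log.

    // preload_sketch.go - unpack the preload tarball under /var, using the
    // same tar invocation as the log (lz4 decompression, xattrs preserved).
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	tarball := "/preloaded.tar.lz4" // already copied to the node in the real run

    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract preload: %v\n%s", err, out)
    	}
    	log.Println("preload extracted under /var")
    }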
	I0914 00:39:39.357464   48055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:39:39.398537   48055 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0914 00:39:39.398569   48055 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 00:39:39.398635   48055 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.398676   48055 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:39.398635   48055 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:39:39.398719   48055 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:39.398728   48055 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:39.398678   48055 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:39.398714   48055 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 00:39:39.398750   48055 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:39.400164   48055 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 00:39:39.400174   48055 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:39.400176   48055 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:39.400182   48055 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:39.400193   48055 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:39:39.400164   48055 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.400222   48055 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:39.400228   48055 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:39.606632   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.626271   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:39.634290   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 00:39:39.637694   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:39.660085   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:39.660707   48055 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0914 00:39:39.660745   48055 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.660782   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.663332   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:39.698861   48055 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0914 00:39:39.698901   48055 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:39.698962   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.698992   48055 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0914 00:39:39.699028   48055 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0914 00:39:39.699067   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.752565   48055 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0914 00:39:39.752611   48055 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:39.752659   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.756627   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:39.770719   48055 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0914 00:39:39.770771   48055 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:39.770804   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.770814   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.770851   48055 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0914 00:39:39.770882   48055 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:39.770890   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:39.770917   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.770938   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0914 00:39:39.771000   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:39.883483   48055 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0914 00:39:39.883533   48055 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:39.883589   48055 ssh_runner.go:195] Run: which crictl
	I0914 00:39:39.885262   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:39.885309   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.901267   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:39.901292   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:39.901311   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:39.901378   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0914 00:39:39.901391   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:39.958818   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0914 00:39:39.994219   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 00:39:40.068934   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0914 00:39:40.068934   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:40.069052   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:40.069064   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:40.069141   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0914 00:39:40.077255   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0914 00:39:40.077358   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0914 00:39:40.118983   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0914 00:39:40.119079   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0914 00:39:40.189010   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0914 00:39:40.189125   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 00:39:40.189133   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 00:39:40.204362   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0914 00:39:40.204405   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0914 00:39:40.204492   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0914 00:39:40.204497   48055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0914 00:39:40.204518   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0914 00:39:40.204545   48055 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0914 00:39:40.204560   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0914 00:39:40.204588   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0914 00:39:40.243887   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 00:39:40.243910   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0914 00:39:40.243984   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 00:39:40.275036   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0914 00:39:40.275084   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0914 00:39:40.275150   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0914 00:39:40.281337   48055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0914 00:39:40.281419   48055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0914 00:39:40.561282   48055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:39:42.908938   48055 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.704328259s)
	I0914 00:39:42.908964   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0914 00:39:42.908987   48055 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0914 00:39:42.909025   48055 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.665020101s)
	I0914 00:39:42.909068   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0914 00:39:42.909034   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0914 00:39:42.909108   48055 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.633919725s)
	I0914 00:39:42.909136   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0914 00:39:42.909173   48055 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.627742643s)
	I0914 00:39:42.909186   48055 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0914 00:39:42.909209   48055 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.347899108s)
	I0914 00:39:43.655678   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0914 00:39:43.655722   48055 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 00:39:43.655775   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0914 00:39:43.796664   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0914 00:39:43.796716   48055 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0914 00:39:43.796782   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0914 00:39:45.938595   48055 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.141785147s)
	I0914 00:39:45.938644   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0914 00:39:45.938678   48055 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 00:39:45.938728   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0914 00:39:46.282326   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0914 00:39:46.282378   48055 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0914 00:39:46.282423   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0914 00:39:47.029782   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0914 00:39:47.029830   48055 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0914 00:39:47.029881   48055 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0914 00:39:47.474008   48055 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0914 00:39:47.474071   48055 cache_images.go:123] Successfully loaded all cached images
	I0914 00:39:47.474080   48055 cache_images.go:92] duration metric: took 8.075498844s to LoadCachedImages
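Note: the LoadCachedImages sequence above checks each image with "podman image inspect", removes any mismatching copy with crictl, and then loads the cached tarball with "podman load -i". A simplified sketch of that fallback loop is below; the image names and tarball paths are taken from the log, and running it for real requires podman plus the cache tarballs on the same machine.

    // cacheload_sketch.go - if an image is not present in the runtime, load
    // its cached tarball with podman, mirroring the commands in the log.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func imagePresent(image string) bool {
    	// Mirrors "sudo podman image inspect --format {{.Id}} <image>".
    	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
    }

    func main() {
    	images := map[string]string{
    		"registry.k8s.io/pause:3.7":               "/var/lib/minikube/images/pause_3.7",
    		"registry.k8s.io/etcd:3.5.3-0":            "/var/lib/minikube/images/etcd_3.5.3-0",
    		"registry.k8s.io/kube-apiserver:v1.24.4":  "/var/lib/minikube/images/kube-apiserver_v1.24.4",
    	}
    	for img, tarball := range images {
    		if imagePresent(img) {
    			log.Printf("%s already loaded, skipping", img)
    			continue
    		}
    		// Mirrors "sudo podman load -i <tarball>" from the log.
    		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
    			log.Fatalf("load %s: %v\n%s", img, err, out)
    		}
    		log.Printf("loaded %s from %s", img, tarball)
    	}
    }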
	I0914 00:39:47.474097   48055 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.24.4 crio true true} ...
	I0914 00:39:47.474239   48055 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-847638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-847638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:39:47.474330   48055 ssh_runner.go:195] Run: crio config
	I0914 00:39:47.521520   48055 cni.go:84] Creating CNI manager for ""
	I0914 00:39:47.521540   48055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:39:47.521549   48055 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:39:47.521567   48055 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-847638 NodeName:test-preload-847638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:39:47.521695   48055 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-847638"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:39:47.521762   48055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0914 00:39:47.531303   48055 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:39:47.531376   48055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:39:47.540407   48055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0914 00:39:47.556470   48055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:39:47.572769   48055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
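Note: the kubeadm config dumped above is generated from the cluster settings (node IP, cluster name, Kubernetes version) and written to /var/tmp/minikube/kubeadm.yaml.new on the node. The sketch below shows the general idea with text/template; the template and field names here are invented for illustration and are not minikube's actual bootstrapper template.

    // kubeadmcfg_sketch.go - render a kubeadm config fragment from a template,
    // roughly like the InitConfiguration/ClusterConfiguration shown above.
    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
    networking:
      podSubnet: "10.244.0.0/16"
      serviceSubnet: 10.96.0.0/12
    `

    func main() {
    	data := struct {
    		NodeIP, NodeName, KubernetesVersion string
    		APIServerPort                       int
    	}{
    		NodeIP:            "192.168.39.203",
    		NodeName:          "test-preload-847638",
    		KubernetesVersion: "v1.24.4",
    		APIServerPort:     8443,
    	}
    	// Render to stdout; the real run writes kubeadm.yaml.new over SSH.
    	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
    		log.Fatal(err)
    	}
    }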
	I0914 00:39:47.589167   48055 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0914 00:39:47.592820   48055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:39:47.604071   48055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:39:47.720842   48055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:39:47.737281   48055 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638 for IP: 192.168.39.203
	I0914 00:39:47.737304   48055 certs.go:194] generating shared ca certs ...
	I0914 00:39:47.737319   48055 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:39:47.737463   48055 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:39:47.737504   48055 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:39:47.737512   48055 certs.go:256] generating profile certs ...
	I0914 00:39:47.737592   48055 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/client.key
	I0914 00:39:47.737658   48055 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/apiserver.key.3e8d33bb
	I0914 00:39:47.737692   48055 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/proxy-client.key
	I0914 00:39:47.737798   48055 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:39:47.737824   48055 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:39:47.737831   48055 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:39:47.737855   48055 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:39:47.737887   48055 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:39:47.737921   48055 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:39:47.737981   48055 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:39:47.738668   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:39:47.774263   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:39:47.807116   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:39:47.839624   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:39:47.878153   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 00:39:47.910099   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:39:47.943088   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:39:47.967771   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 00:39:47.991123   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:39:48.014554   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:39:48.036966   48055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:39:48.058528   48055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:39:48.075140   48055 ssh_runner.go:195] Run: openssl version
	I0914 00:39:48.080745   48055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:39:48.091586   48055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:39:48.095917   48055 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:39:48.095973   48055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:39:48.101828   48055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:39:48.112543   48055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:39:48.122688   48055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:39:48.126685   48055 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:39:48.126752   48055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:39:48.132039   48055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:39:48.142042   48055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:39:48.152045   48055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:39:48.156125   48055 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:39:48.156172   48055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:39:48.161501   48055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
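Note: for each CA certificate copied under /usr/share/ca-certificates, the steps above ask openssl for the certificate hash and create a /etc/ssl/certs/<hash>.0 symlink so OpenSSL's trust lookup can find it. A small Go sketch of that step follows; the certificate path is illustrative and creating the symlink under /etc/ssl/certs would normally require root.

    // cahash_sketch.go - compute a certificate's OpenSSL hash and symlink
    // /etc/ssl/certs/<hash>.0 to it, like the "ln -fs" lines in the log.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

    	// Mirrors "openssl x509 -hash -noout -in <cert>" from the log.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatalf("hash %s: %v", cert, err)
    	}
    	hash := strings.TrimSpace(string(out))

    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // replace any stale link, like "ln -fs"
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatalf("symlink %s -> %s: %v", link, cert, err)
    	}
    	log.Printf("linked %s -> %s", link, cert)
    }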
	I0914 00:39:48.171779   48055 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:39:48.176111   48055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:39:48.181777   48055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:39:48.187226   48055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:39:48.192959   48055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:39:48.198423   48055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:39:48.204002   48055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
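Note: the "openssl x509 ... -checkend 86400" calls above verify that each control-plane certificate is still valid for at least another 24 hours before the cluster is restarted. A pure-Go equivalent using crypto/x509 is sketched below; the certificate path is illustrative.

    // checkend_sketch.go - parse a certificate and report whether it expires
    // within the next 24 hours (the 86400-second window used in the log).
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		log.Fatalf("read %s: %v", certPath, err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatalf("no PEM block in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatalf("parse %s: %v", certPath, err)
    	}

    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		log.Fatalf("%s expires within 24h (NotAfter=%s)", certPath, cert.NotAfter)
    	}
    	log.Printf("%s valid past the next 24h (NotAfter=%s)", certPath, cert.NotAfter)
    }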
	I0914 00:39:48.209282   48055 kubeadm.go:392] StartCluster: {Name:test-preload-847638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-847638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:39:48.209374   48055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:39:48.209412   48055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:39:48.244756   48055 cri.go:89] found id: ""
	I0914 00:39:48.244846   48055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:39:48.254732   48055 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 00:39:48.254753   48055 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 00:39:48.254796   48055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 00:39:48.264087   48055 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:39:48.264552   48055 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-847638" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:39:48.264688   48055 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-847638" cluster setting kubeconfig missing "test-preload-847638" context setting]
	I0914 00:39:48.264946   48055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:39:48.265495   48055 kapi.go:59] client config for test-preload-847638: &rest.Config{Host:"https://192.168.39.203:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 00:39:48.266081   48055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 00:39:48.275131   48055 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.203
	I0914 00:39:48.275163   48055 kubeadm.go:1160] stopping kube-system containers ...
	I0914 00:39:48.275176   48055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 00:39:48.275224   48055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:39:48.309046   48055 cri.go:89] found id: ""
	I0914 00:39:48.309145   48055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 00:39:48.325163   48055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:39:48.334322   48055 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:39:48.334344   48055 kubeadm.go:157] found existing configuration files:
	
	I0914 00:39:48.334401   48055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:39:48.342938   48055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:39:48.343001   48055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:39:48.351850   48055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:39:48.360343   48055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:39:48.360393   48055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:39:48.369674   48055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:39:48.378354   48055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:39:48.378417   48055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:39:48.387362   48055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:39:48.395910   48055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:39:48.395983   48055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:39:48.404877   48055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:39:48.414070   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:39:48.508950   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:39:49.154054   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:39:49.405727   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:39:49.463753   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:39:49.591359   48055 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:39:49.591463   48055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:39:50.091545   48055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:39:50.592240   48055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:39:50.609411   48055 api_server.go:72] duration metric: took 1.018051148s to wait for apiserver process to appear ...
	I0914 00:39:50.609443   48055 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:39:50.609473   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:50.609996   48055 api_server.go:269] stopped: https://192.168.39.203:8443/healthz: Get "https://192.168.39.203:8443/healthz": dial tcp 192.168.39.203:8443: connect: connection refused
	I0914 00:39:51.109546   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:51.110169   48055 api_server.go:269] stopped: https://192.168.39.203:8443/healthz: Get "https://192.168.39.203:8443/healthz": dial tcp 192.168.39.203:8443: connect: connection refused
	I0914 00:39:51.609720   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:54.755317   48055 api_server.go:279] https://192.168.39.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 00:39:54.755347   48055 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 00:39:54.755364   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:54.771207   48055 api_server.go:279] https://192.168.39.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 00:39:54.771235   48055 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 00:39:55.109632   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:55.114609   48055 api_server.go:279] https://192.168.39.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 00:39:55.114634   48055 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 00:39:55.610282   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:55.616369   48055 api_server.go:279] https://192.168.39.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 00:39:55.616399   48055 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 00:39:56.109974   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:39:56.115284   48055 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0914 00:39:56.122484   48055 api_server.go:141] control plane version: v1.24.4
	I0914 00:39:56.122517   48055 api_server.go:131] duration metric: took 5.513065515s to wait for apiserver health ...
	I0914 00:39:56.122528   48055 cni.go:84] Creating CNI manager for ""
	I0914 00:39:56.122536   48055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:39:56.124832   48055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 00:39:56.126610   48055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 00:39:56.142695   48055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 00:39:56.161185   48055 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:39:56.161269   48055 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 00:39:56.161283   48055 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 00:39:56.172253   48055 system_pods.go:59] 7 kube-system pods found
	I0914 00:39:56.172289   48055 system_pods.go:61] "coredns-6d4b75cb6d-mq5l6" [17c851e9-373f-4197-b652-d254884017e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 00:39:56.172299   48055 system_pods.go:61] "etcd-test-preload-847638" [76abd956-30db-4489-950d-29ba335971a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 00:39:56.172305   48055 system_pods.go:61] "kube-apiserver-test-preload-847638" [11f36f49-ff3e-49d0-bb4f-994fab613cab] Running
	I0914 00:39:56.172310   48055 system_pods.go:61] "kube-controller-manager-test-preload-847638" [aa05cc6f-7947-4932-8695-1678659b28b4] Running
	I0914 00:39:56.172315   48055 system_pods.go:61] "kube-proxy-8rbgf" [204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 00:39:56.172318   48055 system_pods.go:61] "kube-scheduler-test-preload-847638" [c56b6211-579c-4f39-9c30-67a0da3a1717] Running
	I0914 00:39:56.172326   48055 system_pods.go:61] "storage-provisioner" [5a3f77b2-7202-448c-8f43-77366fcf4efc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 00:39:56.172332   48055 system_pods.go:74] duration metric: took 11.123723ms to wait for pod list to return data ...
	I0914 00:39:56.172339   48055 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:39:56.175853   48055 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:39:56.175882   48055 node_conditions.go:123] node cpu capacity is 2
	I0914 00:39:56.175893   48055 node_conditions.go:105] duration metric: took 3.549844ms to run NodePressure ...
	I0914 00:39:56.175908   48055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:39:56.362418   48055 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 00:39:56.366764   48055 kubeadm.go:739] kubelet initialised
	I0914 00:39:56.366788   48055 kubeadm.go:740] duration metric: took 4.343995ms waiting for restarted kubelet to initialise ...
	I0914 00:39:56.366795   48055 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:39:56.374560   48055 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace to be "Ready" ...
	I0914 00:39:56.380604   48055 pod_ready.go:98] node "test-preload-847638" hosting pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.380629   48055 pod_ready.go:82] duration metric: took 6.043009ms for pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace to be "Ready" ...
	E0914 00:39:56.380638   48055 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-847638" hosting pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.380645   48055 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:39:56.386588   48055 pod_ready.go:98] node "test-preload-847638" hosting pod "etcd-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.386610   48055 pod_ready.go:82] duration metric: took 5.957557ms for pod "etcd-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	E0914 00:39:56.386619   48055 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-847638" hosting pod "etcd-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.386625   48055 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:39:56.392337   48055 pod_ready.go:98] node "test-preload-847638" hosting pod "kube-apiserver-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.392363   48055 pod_ready.go:82] duration metric: took 5.729338ms for pod "kube-apiserver-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	E0914 00:39:56.392371   48055 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-847638" hosting pod "kube-apiserver-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.392378   48055 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:39:56.565572   48055 pod_ready.go:98] node "test-preload-847638" hosting pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.565604   48055 pod_ready.go:82] duration metric: took 173.216223ms for pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	E0914 00:39:56.565617   48055 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-847638" hosting pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.565627   48055 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8rbgf" in "kube-system" namespace to be "Ready" ...
	I0914 00:39:56.965637   48055 pod_ready.go:98] node "test-preload-847638" hosting pod "kube-proxy-8rbgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.965668   48055 pod_ready.go:82] duration metric: took 400.024018ms for pod "kube-proxy-8rbgf" in "kube-system" namespace to be "Ready" ...
	E0914 00:39:56.965678   48055 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-847638" hosting pod "kube-proxy-8rbgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:56.965685   48055 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:39:57.365084   48055 pod_ready.go:98] node "test-preload-847638" hosting pod "kube-scheduler-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:57.365109   48055 pod_ready.go:82] duration metric: took 399.417596ms for pod "kube-scheduler-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	E0914 00:39:57.365118   48055 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-847638" hosting pod "kube-scheduler-test-preload-847638" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-847638" has status "Ready":"False"
	I0914 00:39:57.365130   48055 pod_ready.go:39] duration metric: took 998.319913ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:39:57.365152   48055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:39:57.376961   48055 ops.go:34] apiserver oom_adj: -16
	I0914 00:39:57.376983   48055 kubeadm.go:597] duration metric: took 9.122224333s to restartPrimaryControlPlane
	I0914 00:39:57.376994   48055 kubeadm.go:394] duration metric: took 9.167720141s to StartCluster
	I0914 00:39:57.377033   48055 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:39:57.377112   48055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:39:57.377762   48055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:39:57.377980   48055 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:39:57.378128   48055 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:39:57.378224   48055 addons.go:69] Setting storage-provisioner=true in profile "test-preload-847638"
	I0914 00:39:57.378242   48055 addons.go:69] Setting default-storageclass=true in profile "test-preload-847638"
	I0914 00:39:57.378250   48055 addons.go:234] Setting addon storage-provisioner=true in "test-preload-847638"
	W0914 00:39:57.378258   48055 addons.go:243] addon storage-provisioner should already be in state true
	I0914 00:39:57.378269   48055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-847638"
	I0914 00:39:57.378289   48055 host.go:66] Checking if "test-preload-847638" exists ...
	I0914 00:39:57.378326   48055 config.go:182] Loaded profile config "test-preload-847638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0914 00:39:57.378642   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:57.378689   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:57.378718   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:57.378749   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:57.379744   48055 out.go:177] * Verifying Kubernetes components...
	I0914 00:39:57.381029   48055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:39:57.393431   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0914 00:39:57.393906   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:57.394488   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:57.394512   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:57.394765   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:57.394974   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetState
	I0914 00:39:57.397027   48055 kapi.go:59] client config for test-preload-847638: &rest.Config{Host:"https://192.168.39.203:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/profiles/test-preload-847638/client.key", CAFile:"/home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 00:39:57.397286   48055 addons.go:234] Setting addon default-storageclass=true in "test-preload-847638"
	W0914 00:39:57.397302   48055 addons.go:243] addon default-storageclass should already be in state true
	I0914 00:39:57.397325   48055 host.go:66] Checking if "test-preload-847638" exists ...
	I0914 00:39:57.397591   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:57.397633   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:57.399037   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0914 00:39:57.399522   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:57.399965   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:57.399983   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:57.400331   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:57.400896   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:57.400938   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:57.412542   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0914 00:39:57.413062   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:57.413605   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:57.413629   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:57.414014   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:57.414466   48055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:39:57.414523   48055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:39:57.415208   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0914 00:39:57.432667   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:57.433322   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:57.433350   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:57.433746   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:57.433938   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetState
	I0914 00:39:57.435942   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:57.437599   48055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:39:57.438662   48055 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:39:57.438687   48055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:39:57.438703   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:57.441826   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:57.442358   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:57.442391   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:57.442588   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:57.442734   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:57.442874   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:57.443014   48055 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa Username:docker}
	I0914 00:39:57.447697   48055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I0914 00:39:57.448191   48055 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:39:57.448751   48055 main.go:141] libmachine: Using API Version  1
	I0914 00:39:57.448773   48055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:39:57.449137   48055 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:39:57.449339   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetState
	I0914 00:39:57.450950   48055 main.go:141] libmachine: (test-preload-847638) Calling .DriverName
	I0914 00:39:57.451174   48055 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:39:57.451191   48055 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:39:57.451206   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHHostname
	I0914 00:39:57.454404   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:57.454805   48055 main.go:141] libmachine: (test-preload-847638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:33", ip: ""} in network mk-test-preload-847638: {Iface:virbr1 ExpiryTime:2024-09-14 01:39:27 +0000 UTC Type:0 Mac:52:54:00:19:fd:33 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-847638 Clientid:01:52:54:00:19:fd:33}
	I0914 00:39:57.454827   48055 main.go:141] libmachine: (test-preload-847638) DBG | domain test-preload-847638 has defined IP address 192.168.39.203 and MAC address 52:54:00:19:fd:33 in network mk-test-preload-847638
	I0914 00:39:57.454944   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHPort
	I0914 00:39:57.455137   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHKeyPath
	I0914 00:39:57.455293   48055 main.go:141] libmachine: (test-preload-847638) Calling .GetSSHUsername
	I0914 00:39:57.455423   48055 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/test-preload-847638/id_rsa Username:docker}
	I0914 00:39:57.549665   48055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:39:57.565495   48055 node_ready.go:35] waiting up to 6m0s for node "test-preload-847638" to be "Ready" ...
	I0914 00:39:57.669573   48055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:39:57.689650   48055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:39:58.611057   48055 main.go:141] libmachine: Making call to close driver server
	I0914 00:39:58.611079   48055 main.go:141] libmachine: (test-preload-847638) Calling .Close
	I0914 00:39:58.611437   48055 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:39:58.611455   48055 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:39:58.611455   48055 main.go:141] libmachine: (test-preload-847638) DBG | Closing plugin on server side
	I0914 00:39:58.611464   48055 main.go:141] libmachine: Making call to close driver server
	I0914 00:39:58.611472   48055 main.go:141] libmachine: (test-preload-847638) Calling .Close
	I0914 00:39:58.611712   48055 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:39:58.611723   48055 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:39:58.618684   48055 main.go:141] libmachine: Making call to close driver server
	I0914 00:39:58.618702   48055 main.go:141] libmachine: (test-preload-847638) Calling .Close
	I0914 00:39:58.618930   48055 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:39:58.618948   48055 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:39:58.618968   48055 main.go:141] libmachine: (test-preload-847638) DBG | Closing plugin on server side
	I0914 00:39:58.651604   48055 main.go:141] libmachine: Making call to close driver server
	I0914 00:39:58.651633   48055 main.go:141] libmachine: (test-preload-847638) Calling .Close
	I0914 00:39:58.652015   48055 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:39:58.652028   48055 main.go:141] libmachine: (test-preload-847638) DBG | Closing plugin on server side
	I0914 00:39:58.652035   48055 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:39:58.652047   48055 main.go:141] libmachine: Making call to close driver server
	I0914 00:39:58.652054   48055 main.go:141] libmachine: (test-preload-847638) Calling .Close
	I0914 00:39:58.652266   48055 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:39:58.652295   48055 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:39:58.652309   48055 main.go:141] libmachine: (test-preload-847638) DBG | Closing plugin on server side
	I0914 00:39:58.655302   48055 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 00:39:58.656500   48055 addons.go:510] duration metric: took 1.278379509s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0914 00:39:59.570027   48055 node_ready.go:53] node "test-preload-847638" has status "Ready":"False"
	I0914 00:40:02.068814   48055 node_ready.go:53] node "test-preload-847638" has status "Ready":"False"
	I0914 00:40:04.070599   48055 node_ready.go:53] node "test-preload-847638" has status "Ready":"False"
	I0914 00:40:05.569595   48055 node_ready.go:49] node "test-preload-847638" has status "Ready":"True"
	I0914 00:40:05.569627   48055 node_ready.go:38] duration metric: took 8.00409206s for node "test-preload-847638" to be "Ready" ...
	I0914 00:40:05.569636   48055 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:40:05.574750   48055 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.579762   48055 pod_ready.go:93] pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace has status "Ready":"True"
	I0914 00:40:05.579808   48055 pod_ready.go:82] duration metric: took 5.030819ms for pod "coredns-6d4b75cb6d-mq5l6" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.579817   48055 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.585607   48055 pod_ready.go:93] pod "etcd-test-preload-847638" in "kube-system" namespace has status "Ready":"True"
	I0914 00:40:05.585639   48055 pod_ready.go:82] duration metric: took 5.80772ms for pod "etcd-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.585652   48055 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.590562   48055 pod_ready.go:93] pod "kube-apiserver-test-preload-847638" in "kube-system" namespace has status "Ready":"True"
	I0914 00:40:05.590584   48055 pod_ready.go:82] duration metric: took 4.924886ms for pod "kube-apiserver-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.590596   48055 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.594857   48055 pod_ready.go:93] pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace has status "Ready":"True"
	I0914 00:40:05.594882   48055 pod_ready.go:82] duration metric: took 4.277613ms for pod "kube-controller-manager-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.594899   48055 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8rbgf" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.969726   48055 pod_ready.go:93] pod "kube-proxy-8rbgf" in "kube-system" namespace has status "Ready":"True"
	I0914 00:40:05.969757   48055 pod_ready.go:82] duration metric: took 374.849358ms for pod "kube-proxy-8rbgf" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:05.969768   48055 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:07.569358   48055 pod_ready.go:93] pod "kube-scheduler-test-preload-847638" in "kube-system" namespace has status "Ready":"True"
	I0914 00:40:07.569387   48055 pod_ready.go:82] duration metric: took 1.599610503s for pod "kube-scheduler-test-preload-847638" in "kube-system" namespace to be "Ready" ...
	I0914 00:40:07.569400   48055 pod_ready.go:39] duration metric: took 1.999750139s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:40:07.569425   48055 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:40:07.569486   48055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:40:07.584722   48055 api_server.go:72] duration metric: took 10.206706839s to wait for apiserver process to appear ...
	I0914 00:40:07.584749   48055 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:40:07.584773   48055 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0914 00:40:07.590311   48055 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0914 00:40:07.591217   48055 api_server.go:141] control plane version: v1.24.4
	I0914 00:40:07.591235   48055 api_server.go:131] duration metric: took 6.479096ms to wait for apiserver health ...
	I0914 00:40:07.591242   48055 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:40:07.773272   48055 system_pods.go:59] 7 kube-system pods found
	I0914 00:40:07.773317   48055 system_pods.go:61] "coredns-6d4b75cb6d-mq5l6" [17c851e9-373f-4197-b652-d254884017e3] Running
	I0914 00:40:07.773325   48055 system_pods.go:61] "etcd-test-preload-847638" [76abd956-30db-4489-950d-29ba335971a8] Running
	I0914 00:40:07.773332   48055 system_pods.go:61] "kube-apiserver-test-preload-847638" [11f36f49-ff3e-49d0-bb4f-994fab613cab] Running
	I0914 00:40:07.773337   48055 system_pods.go:61] "kube-controller-manager-test-preload-847638" [aa05cc6f-7947-4932-8695-1678659b28b4] Running
	I0914 00:40:07.773341   48055 system_pods.go:61] "kube-proxy-8rbgf" [204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6] Running
	I0914 00:40:07.773346   48055 system_pods.go:61] "kube-scheduler-test-preload-847638" [c56b6211-579c-4f39-9c30-67a0da3a1717] Running
	I0914 00:40:07.773350   48055 system_pods.go:61] "storage-provisioner" [5a3f77b2-7202-448c-8f43-77366fcf4efc] Running
	I0914 00:40:07.773358   48055 system_pods.go:74] duration metric: took 182.109816ms to wait for pod list to return data ...
	I0914 00:40:07.773367   48055 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:40:07.969822   48055 default_sa.go:45] found service account: "default"
	I0914 00:40:07.969851   48055 default_sa.go:55] duration metric: took 196.478536ms for default service account to be created ...
	I0914 00:40:07.969860   48055 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:40:08.173804   48055 system_pods.go:86] 7 kube-system pods found
	I0914 00:40:08.173842   48055 system_pods.go:89] "coredns-6d4b75cb6d-mq5l6" [17c851e9-373f-4197-b652-d254884017e3] Running
	I0914 00:40:08.173850   48055 system_pods.go:89] "etcd-test-preload-847638" [76abd956-30db-4489-950d-29ba335971a8] Running
	I0914 00:40:08.173856   48055 system_pods.go:89] "kube-apiserver-test-preload-847638" [11f36f49-ff3e-49d0-bb4f-994fab613cab] Running
	I0914 00:40:08.173861   48055 system_pods.go:89] "kube-controller-manager-test-preload-847638" [aa05cc6f-7947-4932-8695-1678659b28b4] Running
	I0914 00:40:08.173866   48055 system_pods.go:89] "kube-proxy-8rbgf" [204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6] Running
	I0914 00:40:08.173871   48055 system_pods.go:89] "kube-scheduler-test-preload-847638" [c56b6211-579c-4f39-9c30-67a0da3a1717] Running
	I0914 00:40:08.173876   48055 system_pods.go:89] "storage-provisioner" [5a3f77b2-7202-448c-8f43-77366fcf4efc] Running
	I0914 00:40:08.173885   48055 system_pods.go:126] duration metric: took 204.018687ms to wait for k8s-apps to be running ...
	I0914 00:40:08.173895   48055 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:40:08.173950   48055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:40:08.189158   48055 system_svc.go:56] duration metric: took 15.251799ms WaitForService to wait for kubelet
	I0914 00:40:08.189207   48055 kubeadm.go:582] duration metric: took 10.811176974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:40:08.189226   48055 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:40:08.371148   48055 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:40:08.371175   48055 node_conditions.go:123] node cpu capacity is 2
	I0914 00:40:08.371186   48055 node_conditions.go:105] duration metric: took 181.954535ms to run NodePressure ...
	I0914 00:40:08.371196   48055 start.go:241] waiting for startup goroutines ...
	I0914 00:40:08.371203   48055 start.go:246] waiting for cluster config update ...
	I0914 00:40:08.371212   48055 start.go:255] writing updated cluster config ...
	I0914 00:40:08.371462   48055 ssh_runner.go:195] Run: rm -f paused
	I0914 00:40:08.417969   48055 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0914 00:40:08.419769   48055 out.go:201] 
	W0914 00:40:08.420924   48055 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0914 00:40:08.422401   48055 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0914 00:40:08.423639   48055 out.go:177] * Done! kubectl is now configured to use "test-preload-847638" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.285709387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726274409285687818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0eaf52cb-68ab-4035-9bb8-d4afd2adc08e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.286286595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=565d4160-e47e-4e88-94a2-edcd96eb4943 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.286340658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=565d4160-e47e-4e88-94a2-edcd96eb4943 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.286512617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c42148a49ef44e07519fbf1cf3cac65d0142929e6da88eb5ae6af10a5ae293d0,PodSandboxId:71862fce06300f0c56df48f05a810cda5470ca7ec62587075f325a8cf7029117,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726274403775346483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mq5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17c851e9-373f-4197-b652-d254884017e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f81d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81cf433e4f8f8edc3052100b4f409867fd067dce5140085265db647ca34c74b,PodSandboxId:a79c208da2b5a44ea94a1450469a38a70d718de43c1d6711d9d979754d7f5931,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726274396570674206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 5a3f77b2-7202-448c-8f43-77366fcf4efc,},Annotations:map[string]string{io.kubernetes.container.hash: 4638ccbe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5fc37edad5656f591527759a91469b3ade0b9d0c60ee8d0e77fbce4381b159b,PodSandboxId:d9231c85b96a58c13d09cced8acaf774d70d41634866850a0cdfcfccc8f2eae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726274396537580432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204
ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6,},Annotations:map[string]string{io.kubernetes.container.hash: dce9a540,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b416f9dd0c8bcbd97d6cc7f91a0cb2383c098cc6ea34c7920dcaec97fb693b,PodSandboxId:0dd4a1d7fec2a7f70d89f707a3b181158ed2cfbf9a841cdc12eb84fadcec2812,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726274390310798223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986d306e47d5321749ac64dfaac0be96,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8df8a38d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4931fe4a023dd870344f1d04b92ad11e24927a027cd52a537b399473f24feb9e,PodSandboxId:ced5fc0695652979cb8e7464254e23ce672e81144fc1c8fca0e187be67ae1d72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726274390330673118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf13e9c55b62335ca1622b14971fceef,},Annotations:map[
string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcfc90788e8529ffe7a0206c605ce6a3d03aef087e74b529a3f8a9d1351b080,PodSandboxId:6fb59e7967ec5b2587de94281b19a0c84df4e5a6ff9c962fb4fbcc40302945db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726274390237677739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 280c0609270f91147603ac77e1a00fd5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085c2ff9a63b6bb7ca7f65fd23f87c235eeb462437d120ac8fdd044435f4972a,PodSandboxId:ba14ba14ffd933e9132be291abf8082473939d14d3c7ea62cc2a43fa027821e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726274390210446794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70c5ccc196928d881739ac524366b,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=565d4160-e47e-4e88-94a2-edcd96eb4943 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.324343919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26db3483-8e78-4376-8c87-91a6ddfa4777 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.324420438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26db3483-8e78-4376-8c87-91a6ddfa4777 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.326154069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dcaa89c-7332-461b-a4b0-88df30f26041 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.326602573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726274409326579278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dcaa89c-7332-461b-a4b0-88df30f26041 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.327254466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44795094-ffec-426d-96a2-bbf2a580ff14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.327321898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44795094-ffec-426d-96a2-bbf2a580ff14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.327514100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c42148a49ef44e07519fbf1cf3cac65d0142929e6da88eb5ae6af10a5ae293d0,PodSandboxId:71862fce06300f0c56df48f05a810cda5470ca7ec62587075f325a8cf7029117,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726274403775346483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mq5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17c851e9-373f-4197-b652-d254884017e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f81d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81cf433e4f8f8edc3052100b4f409867fd067dce5140085265db647ca34c74b,PodSandboxId:a79c208da2b5a44ea94a1450469a38a70d718de43c1d6711d9d979754d7f5931,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726274396570674206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 5a3f77b2-7202-448c-8f43-77366fcf4efc,},Annotations:map[string]string{io.kubernetes.container.hash: 4638ccbe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5fc37edad5656f591527759a91469b3ade0b9d0c60ee8d0e77fbce4381b159b,PodSandboxId:d9231c85b96a58c13d09cced8acaf774d70d41634866850a0cdfcfccc8f2eae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726274396537580432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204
ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6,},Annotations:map[string]string{io.kubernetes.container.hash: dce9a540,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b416f9dd0c8bcbd97d6cc7f91a0cb2383c098cc6ea34c7920dcaec97fb693b,PodSandboxId:0dd4a1d7fec2a7f70d89f707a3b181158ed2cfbf9a841cdc12eb84fadcec2812,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726274390310798223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986d306e47d5321749ac64dfaac0be96,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8df8a38d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4931fe4a023dd870344f1d04b92ad11e24927a027cd52a537b399473f24feb9e,PodSandboxId:ced5fc0695652979cb8e7464254e23ce672e81144fc1c8fca0e187be67ae1d72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726274390330673118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf13e9c55b62335ca1622b14971fceef,},Annotations:map[
string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcfc90788e8529ffe7a0206c605ce6a3d03aef087e74b529a3f8a9d1351b080,PodSandboxId:6fb59e7967ec5b2587de94281b19a0c84df4e5a6ff9c962fb4fbcc40302945db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726274390237677739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 280c0609270f91147603ac77e1a00fd5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085c2ff9a63b6bb7ca7f65fd23f87c235eeb462437d120ac8fdd044435f4972a,PodSandboxId:ba14ba14ffd933e9132be291abf8082473939d14d3c7ea62cc2a43fa027821e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726274390210446794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70c5ccc196928d881739ac524366b,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44795094-ffec-426d-96a2-bbf2a580ff14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.363226243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f953a4b3-5ccf-466e-8e26-512362eaf328 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.363318437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f953a4b3-5ccf-466e-8e26-512362eaf328 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.364875641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3184422b-eb0c-4e5a-a708-7606b77b1bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.365379278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726274409365355308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3184422b-eb0c-4e5a-a708-7606b77b1bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.366083451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=061e242e-3c13-4c75-ae63-a4ea1484ba25 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.366150098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=061e242e-3c13-4c75-ae63-a4ea1484ba25 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.366307660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c42148a49ef44e07519fbf1cf3cac65d0142929e6da88eb5ae6af10a5ae293d0,PodSandboxId:71862fce06300f0c56df48f05a810cda5470ca7ec62587075f325a8cf7029117,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726274403775346483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mq5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17c851e9-373f-4197-b652-d254884017e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f81d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81cf433e4f8f8edc3052100b4f409867fd067dce5140085265db647ca34c74b,PodSandboxId:a79c208da2b5a44ea94a1450469a38a70d718de43c1d6711d9d979754d7f5931,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726274396570674206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 5a3f77b2-7202-448c-8f43-77366fcf4efc,},Annotations:map[string]string{io.kubernetes.container.hash: 4638ccbe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5fc37edad5656f591527759a91469b3ade0b9d0c60ee8d0e77fbce4381b159b,PodSandboxId:d9231c85b96a58c13d09cced8acaf774d70d41634866850a0cdfcfccc8f2eae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726274396537580432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204
ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6,},Annotations:map[string]string{io.kubernetes.container.hash: dce9a540,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b416f9dd0c8bcbd97d6cc7f91a0cb2383c098cc6ea34c7920dcaec97fb693b,PodSandboxId:0dd4a1d7fec2a7f70d89f707a3b181158ed2cfbf9a841cdc12eb84fadcec2812,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726274390310798223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986d306e47d5321749ac64dfaac0be96,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8df8a38d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4931fe4a023dd870344f1d04b92ad11e24927a027cd52a537b399473f24feb9e,PodSandboxId:ced5fc0695652979cb8e7464254e23ce672e81144fc1c8fca0e187be67ae1d72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726274390330673118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf13e9c55b62335ca1622b14971fceef,},Annotations:map[
string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcfc90788e8529ffe7a0206c605ce6a3d03aef087e74b529a3f8a9d1351b080,PodSandboxId:6fb59e7967ec5b2587de94281b19a0c84df4e5a6ff9c962fb4fbcc40302945db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726274390237677739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 280c0609270f91147603ac77e1a00fd5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085c2ff9a63b6bb7ca7f65fd23f87c235eeb462437d120ac8fdd044435f4972a,PodSandboxId:ba14ba14ffd933e9132be291abf8082473939d14d3c7ea62cc2a43fa027821e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726274390210446794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70c5ccc196928d881739ac524366b,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=061e242e-3c13-4c75-ae63-a4ea1484ba25 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.397920047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3dc6b4b3-5f0f-447d-ae1d-af37ab7059b7 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.398004824Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3dc6b4b3-5f0f-447d-ae1d-af37ab7059b7 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.399390847Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=548fa7f1-bba4-46b1-8ace-df2ba336cdba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.400327139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726274409400295920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=548fa7f1-bba4-46b1-8ace-df2ba336cdba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.400944557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a831c170-5e71-436a-b82c-9b3db24a3041 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.401001859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a831c170-5e71-436a-b82c-9b3db24a3041 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:40:09 test-preload-847638 crio[657]: time="2024-09-14 00:40:09.401173919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c42148a49ef44e07519fbf1cf3cac65d0142929e6da88eb5ae6af10a5ae293d0,PodSandboxId:71862fce06300f0c56df48f05a810cda5470ca7ec62587075f325a8cf7029117,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726274403775346483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mq5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17c851e9-373f-4197-b652-d254884017e3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f81d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81cf433e4f8f8edc3052100b4f409867fd067dce5140085265db647ca34c74b,PodSandboxId:a79c208da2b5a44ea94a1450469a38a70d718de43c1d6711d9d979754d7f5931,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726274396570674206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 5a3f77b2-7202-448c-8f43-77366fcf4efc,},Annotations:map[string]string{io.kubernetes.container.hash: 4638ccbe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5fc37edad5656f591527759a91469b3ade0b9d0c60ee8d0e77fbce4381b159b,PodSandboxId:d9231c85b96a58c13d09cced8acaf774d70d41634866850a0cdfcfccc8f2eae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726274396537580432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204
ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6,},Annotations:map[string]string{io.kubernetes.container.hash: dce9a540,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b416f9dd0c8bcbd97d6cc7f91a0cb2383c098cc6ea34c7920dcaec97fb693b,PodSandboxId:0dd4a1d7fec2a7f70d89f707a3b181158ed2cfbf9a841cdc12eb84fadcec2812,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726274390310798223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986d306e47d5321749ac64dfaac0be96,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8df8a38d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4931fe4a023dd870344f1d04b92ad11e24927a027cd52a537b399473f24feb9e,PodSandboxId:ced5fc0695652979cb8e7464254e23ce672e81144fc1c8fca0e187be67ae1d72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726274390330673118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf13e9c55b62335ca1622b14971fceef,},Annotations:map[
string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcfc90788e8529ffe7a0206c605ce6a3d03aef087e74b529a3f8a9d1351b080,PodSandboxId:6fb59e7967ec5b2587de94281b19a0c84df4e5a6ff9c962fb4fbcc40302945db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726274390237677739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 280c0609270f91147603ac77e1a00fd5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085c2ff9a63b6bb7ca7f65fd23f87c235eeb462437d120ac8fdd044435f4972a,PodSandboxId:ba14ba14ffd933e9132be291abf8082473939d14d3c7ea62cc2a43fa027821e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726274390210446794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-847638,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70c5ccc196928d881739ac524366b,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a831c170-5e71-436a-b82c-9b3db24a3041 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c42148a49ef44       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   71862fce06300       coredns-6d4b75cb6d-mq5l6
	b81cf433e4f8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   a79c208da2b5a       storage-provisioner
	c5fc37edad565       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   d9231c85b96a5       kube-proxy-8rbgf
	4931fe4a023dd       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   ced5fc0695652       kube-apiserver-test-preload-847638
	54b416f9dd0c8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   0dd4a1d7fec2a       etcd-test-preload-847638
	fdcfc90788e85       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   6fb59e7967ec5       kube-controller-manager-test-preload-847638
	085c2ff9a63b6       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   ba14ba14ffd93       kube-scheduler-test-preload-847638
	
	
	==> coredns [c42148a49ef44e07519fbf1cf3cac65d0142929e6da88eb5ae6af10a5ae293d0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:49056 - 17840 "HINFO IN 2019797166240669568.6030249833021979181. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012220412s
	
	
	==> describe nodes <==
	Name:               test-preload-847638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-847638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=test-preload-847638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_37_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:37:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-847638
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:40:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:40:05 +0000   Sat, 14 Sep 2024 00:37:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:40:05 +0000   Sat, 14 Sep 2024 00:37:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:40:05 +0000   Sat, 14 Sep 2024 00:37:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:40:05 +0000   Sat, 14 Sep 2024 00:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    test-preload-847638
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 374d87ee6cbc430ea9cfed32303366df
	  System UUID:                374d87ee-6cbc-430e-a9cf-ed32303366df
	  Boot ID:                    3969670d-3c86-4853-b681-0880cebacef9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-mq5l6                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     118s
	  kube-system                 etcd-test-preload-847638                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m10s
	  kube-system                 kube-apiserver-test-preload-847638             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-test-preload-847638    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-8rbgf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-test-preload-847638             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12s                    kube-proxy       
	  Normal  Starting                 116s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m19s (x5 over 2m19s)  kubelet          Node test-preload-847638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x5 over 2m19s)  kubelet          Node test-preload-847638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x4 over 2m19s)  kubelet          Node test-preload-847638 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m11s                  kubelet          Node test-preload-847638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s                  kubelet          Node test-preload-847638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s                  kubelet          Node test-preload-847638 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m1s                   kubelet          Node test-preload-847638 status is now: NodeReady
	  Normal  RegisteredNode           119s                   node-controller  Node test-preload-847638 event: Registered Node test-preload-847638 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)      kubelet          Node test-preload-847638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)      kubelet          Node test-preload-847638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)      kubelet          Node test-preload-847638 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                     node-controller  Node test-preload-847638 event: Registered Node test-preload-847638 in Controller
	
	
	==> dmesg <==
	[Sep14 00:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050628] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785014] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954675] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.533414] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.057862] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.057369] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062893] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.155668] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.145434] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.263128] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[ +12.461171] systemd-fstab-generator[978]: Ignoring "noauto" option for root device
	[  +0.056317] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.614205] systemd-fstab-generator[1107]: Ignoring "noauto" option for root device
	[  +7.175809] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.937846] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[Sep14 00:40] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [54b416f9dd0c8bcbd97d6cc7f91a0cb2383c098cc6ea34c7920dcaec97fb693b] <==
	{"level":"info","ts":"2024-09-14T00:39:50.741Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"28dd8e6bbca035f5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-14T00:39:50.742Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-14T00:39:50.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461)"}
	{"level":"info","ts":"2024-09-14T00:39:50.746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","added-peer-id":"28dd8e6bbca035f5","added-peer-peer-urls":["https://192.168.39.203:2380"]}
	{"level":"info","ts":"2024-09-14T00:39:50.747Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:39:50.747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:39:50.770Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T00:39:50.770Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:39:50.770Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-14T00:39:50.770Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-14T00:39:50.770Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T00:39:52.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-09-14T00:39:52.299Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:test-preload-847638 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:39:52.299Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:39:52.300Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-09-14T00:39:52.300Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:39:52.301Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:39:52.302Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:39:52.302Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:40:09 up 0 min,  0 users,  load average: 1.56, 0.43, 0.15
	Linux test-preload-847638 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4931fe4a023dd870344f1d04b92ad11e24927a027cd52a537b399473f24feb9e] <==
	I0914 00:39:54.752011       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0914 00:39:54.752048       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0914 00:39:54.693998       1 controller.go:83] Starting OpenAPI AggregationController
	I0914 00:39:54.718335       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0914 00:39:54.754103       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0914 00:39:54.717903       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 00:39:54.820276       1 apf_controller.go:322] Running API Priority and Fairness config worker
	E0914 00:39:54.822438       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0914 00:39:54.856946       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0914 00:39:54.857525       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0914 00:39:54.882493       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:39:54.896274       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0914 00:39:54.901880       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:39:54.902026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:39:54.901959       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 00:39:55.352937       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 00:39:55.700570       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 00:39:56.256503       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0914 00:39:56.270574       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0914 00:39:56.309263       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0914 00:39:56.324048       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:39:56.346549       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 00:39:56.863227       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0914 00:40:07.648235       1 controller.go:611] quota admission added evaluator for: endpoints
	I0914 00:40:07.720482       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fdcfc90788e8529ffe7a0206c605ce6a3d03aef087e74b529a3f8a9d1351b080] <==
	I0914 00:40:07.636809       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0914 00:40:07.642901       1 shared_informer.go:262] Caches are synced for PVC protection
	I0914 00:40:07.666237       1 shared_informer.go:262] Caches are synced for daemon sets
	I0914 00:40:07.674404       1 shared_informer.go:262] Caches are synced for persistent volume
	I0914 00:40:07.675656       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0914 00:40:07.694273       1 shared_informer.go:262] Caches are synced for node
	I0914 00:40:07.694420       1 range_allocator.go:173] Starting range CIDR allocator
	I0914 00:40:07.694495       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0914 00:40:07.694523       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0914 00:40:07.696911       1 shared_informer.go:262] Caches are synced for GC
	I0914 00:40:07.696960       1 shared_informer.go:262] Caches are synced for taint
	I0914 00:40:07.697212       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0914 00:40:07.697340       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-847638. Assuming now as a timestamp.
	I0914 00:40:07.697388       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0914 00:40:07.697444       1 event.go:294] "Event occurred" object="test-preload-847638" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-847638 event: Registered Node test-preload-847638 in Controller"
	I0914 00:40:07.697345       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0914 00:40:07.710229       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0914 00:40:07.715041       1 shared_informer.go:262] Caches are synced for disruption
	I0914 00:40:07.715099       1 disruption.go:371] Sending events to api server.
	I0914 00:40:07.726284       1 shared_informer.go:262] Caches are synced for attach detach
	I0914 00:40:07.798355       1 shared_informer.go:262] Caches are synced for resource quota
	I0914 00:40:07.820258       1 shared_informer.go:262] Caches are synced for resource quota
	I0914 00:40:08.265012       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 00:40:08.280576       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 00:40:08.280627       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [c5fc37edad5656f591527759a91469b3ade0b9d0c60ee8d0e77fbce4381b159b] <==
	I0914 00:39:56.812096       1 node.go:163] Successfully retrieved node IP: 192.168.39.203
	I0914 00:39:56.812199       1 server_others.go:138] "Detected node IP" address="192.168.39.203"
	I0914 00:39:56.812285       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0914 00:39:56.844922       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0914 00:39:56.844951       1 server_others.go:206] "Using iptables Proxier"
	I0914 00:39:56.845354       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0914 00:39:56.846287       1 server.go:661] "Version info" version="v1.24.4"
	I0914 00:39:56.846314       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:39:56.847749       1 config.go:317] "Starting service config controller"
	I0914 00:39:56.847791       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0914 00:39:56.847815       1 config.go:226] "Starting endpoint slice config controller"
	I0914 00:39:56.847818       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0914 00:39:56.850316       1 config.go:444] "Starting node config controller"
	I0914 00:39:56.852683       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0914 00:39:56.948732       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0914 00:39:56.948819       1 shared_informer.go:262] Caches are synced for service config
	I0914 00:39:56.960962       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [085c2ff9a63b6bb7ca7f65fd23f87c235eeb462437d120ac8fdd044435f4972a] <==
	I0914 00:39:50.974562       1 serving.go:348] Generated self-signed cert in-memory
	W0914 00:39:54.766697       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 00:39:54.767121       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 00:39:54.767214       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 00:39:54.767343       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 00:39:54.841475       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0914 00:39:54.842055       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:39:54.845175       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 00:39:54.845670       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 00:39:54.845994       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 00:39:54.845703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 00:39:54.946497       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:39:54 test-preload-847638 kubelet[1114]: I0914 00:39:54.868337    1114 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-847638"
	Sep 14 00:39:54 test-preload-847638 kubelet[1114]: I0914 00:39:54.873070    1114 setters.go:532] "Node became not ready" node="test-preload-847638" condition={Type:Ready Status:False LastHeartbeatTime:2024-09-14 00:39:54.872964861 +0000 UTC m=+5.474496543 LastTransitionTime:2024-09-14 00:39:54.872964861 +0000 UTC m=+5.474496543 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.519039    1114 apiserver.go:52] "Watching apiserver"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.524496    1114 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.524706    1114 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.524896    1114 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: E0914 00:39:55.526160    1114 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-mq5l6" podUID=17c851e9-373f-4197-b652-d254884017e3
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589024    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6-lib-modules\") pod \"kube-proxy-8rbgf\" (UID: \"204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6\") " pod="kube-system/kube-proxy-8rbgf"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589399    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume\") pod \"coredns-6d4b75cb6d-mq5l6\" (UID: \"17c851e9-373f-4197-b652-d254884017e3\") " pod="kube-system/coredns-6d4b75cb6d-mq5l6"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589462    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5a3f77b2-7202-448c-8f43-77366fcf4efc-tmp\") pod \"storage-provisioner\" (UID: \"5a3f77b2-7202-448c-8f43-77366fcf4efc\") " pod="kube-system/storage-provisioner"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589524    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6-kube-proxy\") pod \"kube-proxy-8rbgf\" (UID: \"204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6\") " pod="kube-system/kube-proxy-8rbgf"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589619    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6-xtables-lock\") pod \"kube-proxy-8rbgf\" (UID: \"204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6\") " pod="kube-system/kube-proxy-8rbgf"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589684    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b66wq\" (UniqueName: \"kubernetes.io/projected/17c851e9-373f-4197-b652-d254884017e3-kube-api-access-b66wq\") pod \"coredns-6d4b75cb6d-mq5l6\" (UID: \"17c851e9-373f-4197-b652-d254884017e3\") " pod="kube-system/coredns-6d4b75cb6d-mq5l6"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589733    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knsp9\" (UniqueName: \"kubernetes.io/projected/204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6-kube-api-access-knsp9\") pod \"kube-proxy-8rbgf\" (UID: \"204ac0f6-51f2-4c4a-89ea-ac3d7b2cb3b6\") " pod="kube-system/kube-proxy-8rbgf"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589764    1114 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k47c\" (UniqueName: \"kubernetes.io/projected/5a3f77b2-7202-448c-8f43-77366fcf4efc-kube-api-access-8k47c\") pod \"storage-provisioner\" (UID: \"5a3f77b2-7202-448c-8f43-77366fcf4efc\") " pod="kube-system/storage-provisioner"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: I0914 00:39:55.589792    1114 reconciler.go:159] "Reconciler: start to sync state"
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: E0914 00:39:55.693956    1114 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 00:39:55 test-preload-847638 kubelet[1114]: E0914 00:39:55.694074    1114 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume podName:17c851e9-373f-4197-b652-d254884017e3 nodeName:}" failed. No retries permitted until 2024-09-14 00:39:56.194032202 +0000 UTC m=+6.795563896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume") pod "coredns-6d4b75cb6d-mq5l6" (UID: "17c851e9-373f-4197-b652-d254884017e3") : object "kube-system"/"coredns" not registered
	Sep 14 00:39:56 test-preload-847638 kubelet[1114]: E0914 00:39:56.197126    1114 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 00:39:56 test-preload-847638 kubelet[1114]: E0914 00:39:56.197210    1114 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume podName:17c851e9-373f-4197-b652-d254884017e3 nodeName:}" failed. No retries permitted until 2024-09-14 00:39:57.197194401 +0000 UTC m=+7.798726088 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume") pod "coredns-6d4b75cb6d-mq5l6" (UID: "17c851e9-373f-4197-b652-d254884017e3") : object "kube-system"/"coredns" not registered
	Sep 14 00:39:57 test-preload-847638 kubelet[1114]: E0914 00:39:57.205694    1114 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 00:39:57 test-preload-847638 kubelet[1114]: E0914 00:39:57.206247    1114 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume podName:17c851e9-373f-4197-b652-d254884017e3 nodeName:}" failed. No retries permitted until 2024-09-14 00:39:59.206221613 +0000 UTC m=+9.807753311 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume") pod "coredns-6d4b75cb6d-mq5l6" (UID: "17c851e9-373f-4197-b652-d254884017e3") : object "kube-system"/"coredns" not registered
	Sep 14 00:39:57 test-preload-847638 kubelet[1114]: E0914 00:39:57.631452    1114 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-mq5l6" podUID=17c851e9-373f-4197-b652-d254884017e3
	Sep 14 00:39:59 test-preload-847638 kubelet[1114]: E0914 00:39:59.218407    1114 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 00:39:59 test-preload-847638 kubelet[1114]: E0914 00:39:59.219081    1114 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume podName:17c851e9-373f-4197-b652-d254884017e3 nodeName:}" failed. No retries permitted until 2024-09-14 00:40:03.219003533 +0000 UTC m=+13.820535235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17c851e9-373f-4197-b652-d254884017e3-config-volume") pod "coredns-6d4b75cb6d-mq5l6" (UID: "17c851e9-373f-4197-b652-d254884017e3") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [b81cf433e4f8f8edc3052100b4f409867fd067dce5140085265db647ca34c74b] <==
	I0914 00:39:56.675302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-847638 -n test-preload-847638
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-847638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-847638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-847638
--- FAIL: TestPreload (203.92s)

x
+
TestKubernetesUpgrade (365.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m44.692677284s)

-- stdout --
	* [kubernetes-upgrade-271886] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-271886" primary control-plane node in "kubernetes-upgrade-271886" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0914 00:44:59.218924   54475 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:44:59.219050   54475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:44:59.219058   54475 out.go:358] Setting ErrFile to fd 2...
	I0914 00:44:59.219063   54475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:44:59.219240   54475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:44:59.219866   54475 out.go:352] Setting JSON to false
	I0914 00:44:59.220862   54475 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5245,"bootTime":1726269454,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:44:59.220963   54475 start.go:139] virtualization: kvm guest
	I0914 00:44:59.223069   54475 out.go:177] * [kubernetes-upgrade-271886] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:44:59.224343   54475 notify.go:220] Checking for updates...
	I0914 00:44:59.224352   54475 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:44:59.225733   54475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:44:59.227301   54475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:44:59.228538   54475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:44:59.229605   54475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:44:59.230696   54475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:44:59.232246   54475 config.go:182] Loaded profile config "NoKubernetes-444049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0914 00:44:59.232352   54475 config.go:182] Loaded profile config "cert-expiration-554954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:44:59.232459   54475 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:44:59.271829   54475 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 00:44:59.273276   54475 start.go:297] selected driver: kvm2
	I0914 00:44:59.273297   54475 start.go:901] validating driver "kvm2" against <nil>
	I0914 00:44:59.273309   54475 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:44:59.274038   54475 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:44:59.274133   54475 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:44:59.289975   54475 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:44:59.290035   54475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:44:59.290306   54475 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 00:44:59.290335   54475 cni.go:84] Creating CNI manager for ""
	I0914 00:44:59.290387   54475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:44:59.290402   54475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 00:44:59.290464   54475 start.go:340] cluster config:
	{Name:kubernetes-upgrade-271886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:44:59.290614   54475 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:44:59.293488   54475 out.go:177] * Starting "kubernetes-upgrade-271886" primary control-plane node in "kubernetes-upgrade-271886" cluster
	I0914 00:44:59.294967   54475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:44:59.295033   54475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 00:44:59.295047   54475 cache.go:56] Caching tarball of preloaded images
	I0914 00:44:59.295154   54475 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:44:59.295167   54475 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 00:44:59.295286   54475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/config.json ...
	I0914 00:44:59.295318   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/config.json: {Name:mkabc07d5d6b8b82bb02764fd34d4c2fd9bcbe76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:44:59.295459   54475 start.go:360] acquireMachinesLock for kubernetes-upgrade-271886: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:45:15.212697   54475 start.go:364] duration metric: took 15.917191285s to acquireMachinesLock for "kubernetes-upgrade-271886"
	I0914 00:45:15.212781   54475 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-271886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:45:15.212924   54475 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 00:45:15.214822   54475 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 00:45:15.215055   54475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:45:15.215101   54475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:45:15.231736   54475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 00:45:15.232227   54475 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:45:15.232840   54475 main.go:141] libmachine: Using API Version  1
	I0914 00:45:15.232865   54475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:45:15.233163   54475 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:45:15.233331   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetMachineName
	I0914 00:45:15.233529   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:15.233671   54475 start.go:159] libmachine.API.Create for "kubernetes-upgrade-271886" (driver="kvm2")
	I0914 00:45:15.233699   54475 client.go:168] LocalClient.Create starting
	I0914 00:45:15.233735   54475 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0914 00:45:15.233780   54475 main.go:141] libmachine: Decoding PEM data...
	I0914 00:45:15.233800   54475 main.go:141] libmachine: Parsing certificate...
	I0914 00:45:15.233859   54475 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0914 00:45:15.233882   54475 main.go:141] libmachine: Decoding PEM data...
	I0914 00:45:15.233912   54475 main.go:141] libmachine: Parsing certificate...
	I0914 00:45:15.233935   54475 main.go:141] libmachine: Running pre-create checks...
	I0914 00:45:15.233946   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .PreCreateCheck
	I0914 00:45:15.234307   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetConfigRaw
	I0914 00:45:15.234775   54475 main.go:141] libmachine: Creating machine...
	I0914 00:45:15.234794   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .Create
	I0914 00:45:15.234959   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Creating KVM machine...
	I0914 00:45:15.236259   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found existing default KVM network
	I0914 00:45:15.237450   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.237265   54635 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bd:07:3e} reservation:<nil>}
	I0914 00:45:15.238405   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.238317   54635 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:28:53} reservation:<nil>}
	I0914 00:45:15.239421   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.239347   54635 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034a4d0}
	I0914 00:45:15.239464   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | created network xml: 
	I0914 00:45:15.239491   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | <network>
	I0914 00:45:15.239517   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |   <name>mk-kubernetes-upgrade-271886</name>
	I0914 00:45:15.239560   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |   <dns enable='no'/>
	I0914 00:45:15.239569   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |   
	I0914 00:45:15.239576   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0914 00:45:15.239587   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |     <dhcp>
	I0914 00:45:15.239603   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0914 00:45:15.239615   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |     </dhcp>
	I0914 00:45:15.239624   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |   </ip>
	I0914 00:45:15.239658   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG |   
	I0914 00:45:15.239695   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | </network>
	I0914 00:45:15.239711   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | 
	I0914 00:45:15.245572   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | trying to create private KVM network mk-kubernetes-upgrade-271886 192.168.61.0/24...
	I0914 00:45:15.334782   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886 ...
	I0914 00:45:15.334817   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | private KVM network mk-kubernetes-upgrade-271886 192.168.61.0/24 created
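
The network XML above is what the kvm2 driver hands to libvirt when it creates the private mk-kubernetes-upgrade-271886 network. A minimal sketch of that step using the libvirt Go bindings (libvirt.org/go/libvirt) follows; the XML literal is copied from the log, while the program structure, names and error handling are illustrative rather than minikube's actual code.

    // Sketch: persistently define the private network from the XML logged
    // above, then start it and mark it autostart. Illustrative only.
    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-kubernetes-upgrade-271886</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the cluster config
        if err != nil {
            log.Fatalf("connect to libvirt: %v", err)
        }
        defer conn.Close()

        net, err := conn.NetworkDefineXML(networkXML) // "trying to create private KVM network ..."
        if err != nil {
            log.Fatalf("define network: %v", err)
        }
        if err := net.Create(); err != nil { // bring the bridge up
            log.Fatalf("start network: %v", err)
        }
        if err := net.SetAutostart(true); err != nil {
            log.Fatalf("autostart network: %v", err)
        }
    }
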
	I0914 00:45:15.334829   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0914 00:45:15.334847   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.334729   54635 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:45:15.334914   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0914 00:45:15.575869   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.575734   54635 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa...
	I0914 00:45:15.726329   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.726160   54635 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/kubernetes-upgrade-271886.rawdisk...
	I0914 00:45:15.726364   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Writing magic tar header
	I0914 00:45:15.726387   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Writing SSH key tar header
	I0914 00:45:15.726400   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:15.726302   54635 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886 ...
	I0914 00:45:15.726425   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886
	I0914 00:45:15.726527   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0914 00:45:15.726559   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:45:15.726572   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886 (perms=drwx------)
	I0914 00:45:15.726595   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0914 00:45:15.726608   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0914 00:45:15.726616   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 00:45:15.726627   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home/jenkins
	I0914 00:45:15.726635   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Checking permissions on dir: /home
	I0914 00:45:15.726651   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Skipping /home - not owner
	I0914 00:45:15.726667   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0914 00:45:15.726680   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0914 00:45:15.726695   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 00:45:15.726709   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 00:45:15.726719   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Creating domain...
	I0914 00:45:15.728003   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) define libvirt domain using xml: 
	I0914 00:45:15.728028   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) <domain type='kvm'>
	I0914 00:45:15.728039   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <name>kubernetes-upgrade-271886</name>
	I0914 00:45:15.728049   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <memory unit='MiB'>2200</memory>
	I0914 00:45:15.728057   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <vcpu>2</vcpu>
	I0914 00:45:15.728069   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <features>
	I0914 00:45:15.728080   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <acpi/>
	I0914 00:45:15.728094   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <apic/>
	I0914 00:45:15.728102   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <pae/>
	I0914 00:45:15.728107   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     
	I0914 00:45:15.728115   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   </features>
	I0914 00:45:15.728119   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <cpu mode='host-passthrough'>
	I0914 00:45:15.728123   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   
	I0914 00:45:15.728128   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   </cpu>
	I0914 00:45:15.728133   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <os>
	I0914 00:45:15.728146   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <type>hvm</type>
	I0914 00:45:15.728158   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <boot dev='cdrom'/>
	I0914 00:45:15.728164   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <boot dev='hd'/>
	I0914 00:45:15.728185   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <bootmenu enable='no'/>
	I0914 00:45:15.728203   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   </os>
	I0914 00:45:15.728214   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   <devices>
	I0914 00:45:15.728252   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <disk type='file' device='cdrom'>
	I0914 00:45:15.728386   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/boot2docker.iso'/>
	I0914 00:45:15.728427   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <target dev='hdc' bus='scsi'/>
	I0914 00:45:15.728445   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <readonly/>
	I0914 00:45:15.728454   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </disk>
	I0914 00:45:15.728463   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <disk type='file' device='disk'>
	I0914 00:45:15.728478   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 00:45:15.728496   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/kubernetes-upgrade-271886.rawdisk'/>
	I0914 00:45:15.728514   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <target dev='hda' bus='virtio'/>
	I0914 00:45:15.728532   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </disk>
	I0914 00:45:15.728551   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <interface type='network'>
	I0914 00:45:15.728566   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <source network='mk-kubernetes-upgrade-271886'/>
	I0914 00:45:15.728575   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <model type='virtio'/>
	I0914 00:45:15.728582   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </interface>
	I0914 00:45:15.728591   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <interface type='network'>
	I0914 00:45:15.728599   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <source network='default'/>
	I0914 00:45:15.728611   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <model type='virtio'/>
	I0914 00:45:15.728641   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </interface>
	I0914 00:45:15.728677   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <serial type='pty'>
	I0914 00:45:15.728708   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <target port='0'/>
	I0914 00:45:15.728719   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </serial>
	I0914 00:45:15.728728   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <console type='pty'>
	I0914 00:45:15.728745   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <target type='serial' port='0'/>
	I0914 00:45:15.728757   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </console>
	I0914 00:45:15.728765   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     <rng model='virtio'>
	I0914 00:45:15.728775   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)       <backend model='random'>/dev/random</backend>
	I0914 00:45:15.728785   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     </rng>
	I0914 00:45:15.728794   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     
	I0914 00:45:15.728804   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)     
	I0914 00:45:15.728819   54475 main.go:141] libmachine: (kubernetes-upgrade-271886)   </devices>
	I0914 00:45:15.728836   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) </domain>
	I0914 00:45:15.728854   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) 
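
The <domain type='kvm'> document just logged is the second libvirt object the driver creates. Continuing the sketch above (same conn, with the logged XML held in a domainXML string), defining and booting the VM would look roughly like this; dom and domainXML are illustrative names, not minikube identifiers.

    // Sketch, continued from the network example: define the VM from the
    // logged domain XML, then boot it ("Creating domain..." / "Waiting to get IP...").
    dom, err := conn.DomainDefineXML(domainXML)
    if err != nil {
        log.Fatalf("define domain: %v", err)
    }
    if err := dom.Create(); err != nil { // starts the guest; DHCP assigns the IP afterwards
        log.Fatalf("start domain: %v", err)
    }
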
	I0914 00:45:15.733936   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:5e:fe:3f in network default
	I0914 00:45:15.734680   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:15.734703   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Ensuring networks are active...
	I0914 00:45:15.735813   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Ensuring network default is active
	I0914 00:45:15.736192   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Ensuring network mk-kubernetes-upgrade-271886 is active
	I0914 00:45:15.736825   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Getting domain xml...
	I0914 00:45:15.737719   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Creating domain...
	I0914 00:45:17.053164   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Waiting to get IP...
	I0914 00:45:17.054149   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:17.054778   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:17.054841   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:17.054755   54635 retry.go:31] will retry after 197.914427ms: waiting for machine to come up
	I0914 00:45:17.254223   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:17.254783   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:17.254815   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:17.254730   54635 retry.go:31] will retry after 314.367746ms: waiting for machine to come up
	I0914 00:45:17.570131   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:17.570685   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:17.570717   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:17.570645   54635 retry.go:31] will retry after 460.109713ms: waiting for machine to come up
	I0914 00:45:18.032319   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:18.032737   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:18.032768   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:18.032704   54635 retry.go:31] will retry after 518.83352ms: waiting for machine to come up
	I0914 00:45:18.553485   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:18.553964   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:18.553990   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:18.553911   54635 retry.go:31] will retry after 482.801959ms: waiting for machine to come up
	I0914 00:45:19.038617   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:19.038990   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:19.039007   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:19.038981   54635 retry.go:31] will retry after 787.131104ms: waiting for machine to come up
	I0914 00:45:19.828197   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:19.828778   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:19.828806   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:19.828727   54635 retry.go:31] will retry after 943.997464ms: waiting for machine to come up
	I0914 00:45:20.774171   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:20.774625   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:20.774647   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:20.774562   54635 retry.go:31] will retry after 1.127576678s: waiting for machine to come up
	I0914 00:45:21.904198   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:21.904623   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:21.904640   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:21.904594   54635 retry.go:31] will retry after 1.603406751s: waiting for machine to come up
	I0914 00:45:23.509359   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:23.509857   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:23.509879   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:23.509819   54635 retry.go:31] will retry after 1.448336416s: waiting for machine to come up
	I0914 00:45:24.960021   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:24.960846   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:24.960880   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:24.960756   54635 retry.go:31] will retry after 2.474524201s: waiting for machine to come up
	I0914 00:45:27.437918   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:27.438296   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:27.438324   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:27.438280   54635 retry.go:31] will retry after 2.839025354s: waiting for machine to come up
	I0914 00:45:30.278736   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:30.279182   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:30.279203   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:30.279125   54635 retry.go:31] will retry after 4.041576134s: waiting for machine to come up
	I0914 00:45:34.322025   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:34.322489   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find current IP address of domain kubernetes-upgrade-271886 in network mk-kubernetes-upgrade-271886
	I0914 00:45:34.322507   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | I0914 00:45:34.322444   54635 retry.go:31] will retry after 3.746496576s: waiting for machine to come up
	I0914 00:45:38.072647   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.073038   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Found IP for machine: 192.168.61.53
	I0914 00:45:38.073086   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has current primary IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
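
The retries above (197ms, 314ms, ... up to roughly 4s) are the driver polling the network's DHCP leases until the guest's MAC address shows up with an address. A small sketch of that pattern, assuming the usual time/fmt imports; waitForIP and lookupLease are hypothetical names, not minikube functions.

    // Sketch: poll a lease-lookup function with growing backoff until the
    // guest gets an address or the timeout expires.
    func waitForIP(lookupLease func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupLease(); ok {
                return ip, nil // e.g. "192.168.61.53" once the DHCP lease appears
            }
            time.Sleep(backoff)
            if backoff < 4*time.Second {
                backoff *= 2 // roughly mirrors the 197ms ... 4s retry intervals in the log
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP after %s", timeout)
    }
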
	I0914 00:45:38.073093   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Reserving static IP address...
	I0914 00:45:38.073489   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-271886", mac: "52:54:00:de:bd:e8", ip: "192.168.61.53"} in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.150517   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Reserved static IP address: 192.168.61.53
	I0914 00:45:38.150561   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Getting to WaitForSSH function...
	I0914 00:45:38.150572   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Waiting for SSH to be available...
	I0914 00:45:38.153133   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.153552   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.153593   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.153669   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Using SSH client type: external
	I0914 00:45:38.153693   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa (-rw-------)
	I0914 00:45:38.153735   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 00:45:38.153744   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | About to run SSH command:
	I0914 00:45:38.153756   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | exit 0
	I0914 00:45:38.279779   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | SSH cmd err, output: <nil>: 
	I0914 00:45:38.280147   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) KVM machine creation complete!
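
WaitForSSH above shells out to the external ssh client with StrictHostKeyChecking=no and the generated id_rsa, and simply runs exit 0 until it succeeds. An equivalent in-process check with golang.org/x/crypto/ssh might look like the sketch below; sshReady is an illustrative helper name and the os/time/ssh imports are assumed.

    // Sketch: dial the new guest over SSH with the machine's private key and
    // run `exit 0`, the same liveness probe the log shows.
    func sshReady(addr, keyPath string) error {
        keyPEM, err := os.ReadFile(keyPath) // e.g. .../machines/kubernetes-upgrade-271886/id_rsa
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg) // e.g. "192.168.61.53:22"
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }
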
	I0914 00:45:38.280429   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetConfigRaw
	I0914 00:45:38.280980   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:38.281154   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:38.281319   54475 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 00:45:38.281330   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetState
	I0914 00:45:38.282546   54475 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 00:45:38.282557   54475 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 00:45:38.282563   54475 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 00:45:38.282569   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:38.284883   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.285238   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.285265   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.285401   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:38.285552   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.285675   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.285827   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:38.285941   54475 main.go:141] libmachine: Using SSH client type: native
	I0914 00:45:38.286115   54475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.53 22 <nil> <nil>}
	I0914 00:45:38.286124   54475 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 00:45:38.394797   54475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:45:38.394826   54475 main.go:141] libmachine: Detecting the provisioner...
	I0914 00:45:38.394838   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:38.397694   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.398040   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.398072   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.398176   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:38.398377   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.398526   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.398662   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:38.398785   54475 main.go:141] libmachine: Using SSH client type: native
	I0914 00:45:38.399007   54475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.53 22 <nil> <nil>}
	I0914 00:45:38.399023   54475 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 00:45:38.508278   54475 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 00:45:38.508377   54475 main.go:141] libmachine: found compatible host: buildroot
	I0914 00:45:38.508393   54475 main.go:141] libmachine: Provisioning with buildroot...
	I0914 00:45:38.508404   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetMachineName
	I0914 00:45:38.508624   54475 buildroot.go:166] provisioning hostname "kubernetes-upgrade-271886"
	I0914 00:45:38.508648   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetMachineName
	I0914 00:45:38.508806   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:38.511468   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.511841   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.511882   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.512071   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:38.512267   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.512448   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.512576   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:38.512708   54475 main.go:141] libmachine: Using SSH client type: native
	I0914 00:45:38.512893   54475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.53 22 <nil> <nil>}
	I0914 00:45:38.512905   54475 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-271886 && echo "kubernetes-upgrade-271886" | sudo tee /etc/hostname
	I0914 00:45:38.637486   54475 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-271886
	
	I0914 00:45:38.637515   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:38.640219   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.640538   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.640573   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.640731   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:38.640921   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.641081   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:38.641300   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:38.641464   54475 main.go:141] libmachine: Using SSH client type: native
	I0914 00:45:38.641641   54475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.53 22 <nil> <nil>}
	I0914 00:45:38.641657   54475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-271886' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-271886/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-271886' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:45:38.760314   54475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:45:38.760340   54475 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:45:38.760371   54475 buildroot.go:174] setting up certificates
	I0914 00:45:38.760381   54475 provision.go:84] configureAuth start
	I0914 00:45:38.760397   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetMachineName
	I0914 00:45:38.760724   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetIP
	I0914 00:45:38.763437   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.763846   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.763873   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.763952   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:38.766449   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.766770   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:38.766797   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:38.766922   54475 provision.go:143] copyHostCerts
	I0914 00:45:38.766969   54475 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:45:38.766981   54475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:45:38.767047   54475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:45:38.767173   54475 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:45:38.767185   54475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:45:38.767217   54475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:45:38.767301   54475 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:45:38.767310   54475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:45:38.767341   54475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:45:38.767424   54475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-271886 san=[127.0.0.1 192.168.61.53 kubernetes-upgrade-271886 localhost minikube]
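
configureAuth then issues a server certificate signed by the local minikube CA with the SANs listed above (127.0.0.1, 192.168.61.53, the hostname, localhost, minikube). A compact sketch of that kind of issuance with the Go standard library; newServerCert is an illustrative name, the crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, encoding/pem, math/big, net and time imports are assumed, and the details differ from minikube's own helper.

    // Sketch: issue a server certificate signed by an existing CA, embedding
    // the machine's IPs and DNS names as SANs.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-271886"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // matches the 26280h CertExpiration in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.61.53
            DNSNames:     dnsNames, // e.g. kubernetes-upgrade-271886, localhost, minikube
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
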
	I0914 00:45:39.423414   54475 provision.go:177] copyRemoteCerts
	I0914 00:45:39.423478   54475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:45:39.423512   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:39.426593   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.426993   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:39.427025   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.427252   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:39.427455   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:39.427624   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:39.427747   54475 sshutil.go:53] new ssh client: &{IP:192.168.61.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa Username:docker}
	I0914 00:45:39.513747   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:45:39.537692   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 00:45:39.560633   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:45:39.582648   54475 provision.go:87] duration metric: took 822.251776ms to configureAuth
	I0914 00:45:39.582677   54475 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:45:39.582849   54475 config.go:182] Loaded profile config "kubernetes-upgrade-271886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 00:45:39.582962   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:39.585538   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.585861   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:39.585896   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.586022   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:39.586211   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:39.586375   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:39.586487   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:39.586608   54475 main.go:141] libmachine: Using SSH client type: native
	I0914 00:45:39.586781   54475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.53 22 <nil> <nil>}
	I0914 00:45:39.586801   54475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:45:39.808532   54475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:45:39.808554   54475 main.go:141] libmachine: Checking connection to Docker...
	I0914 00:45:39.808562   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetURL
	I0914 00:45:39.810048   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | Using libvirt version 6000000
	I0914 00:45:39.812628   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.812962   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:39.812989   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.813188   54475 main.go:141] libmachine: Docker is up and running!
	I0914 00:45:39.813209   54475 main.go:141] libmachine: Reticulating splines...
	I0914 00:45:39.813218   54475 client.go:171] duration metric: took 24.579508325s to LocalClient.Create
	I0914 00:45:39.813244   54475 start.go:167] duration metric: took 24.579574244s to libmachine.API.Create "kubernetes-upgrade-271886"
	I0914 00:45:39.813256   54475 start.go:293] postStartSetup for "kubernetes-upgrade-271886" (driver="kvm2")
	I0914 00:45:39.813267   54475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:45:39.813292   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:39.813600   54475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:45:39.813632   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:39.816110   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.816449   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:39.816472   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.816661   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:39.816844   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:39.816999   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:39.817134   54475 sshutil.go:53] new ssh client: &{IP:192.168.61.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa Username:docker}
	I0914 00:45:39.901895   54475 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:45:39.906128   54475 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:45:39.906152   54475 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:45:39.906210   54475 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:45:39.906291   54475 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:45:39.906380   54475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:45:39.915819   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:45:39.941951   54475 start.go:296] duration metric: took 128.682026ms for postStartSetup
	I0914 00:45:39.942010   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetConfigRaw
	I0914 00:45:39.942598   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetIP
	I0914 00:45:39.945333   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.945663   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:39.945696   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.945953   54475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/config.json ...
	I0914 00:45:39.946173   54475 start.go:128] duration metric: took 24.733236677s to createHost
	I0914 00:45:39.946198   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:39.948392   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.948712   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:39.948747   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:39.948901   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:39.949068   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:39.949221   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:39.949361   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:39.949485   54475 main.go:141] libmachine: Using SSH client type: native
	I0914 00:45:39.949638   54475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.53 22 <nil> <nil>}
	I0914 00:45:39.949646   54475 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:45:40.060432   54475 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726274740.021164268
	
	I0914 00:45:40.060454   54475 fix.go:216] guest clock: 1726274740.021164268
	I0914 00:45:40.060460   54475 fix.go:229] Guest: 2024-09-14 00:45:40.021164268 +0000 UTC Remote: 2024-09-14 00:45:39.946184379 +0000 UTC m=+40.766358815 (delta=74.979889ms)
	I0914 00:45:40.060507   54475 fix.go:200] guest clock delta is within tolerance: 74.979889ms
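	Note on the guest clock check above: minikube runs "date +%s.%N" inside the VM, parses the result, and compares it with the host clock, accepting the result when the skew is small. A minimal Go sketch of that comparison, assuming a hypothetical 2-second tolerance (helper names are illustrative, not minikube's fix.go API):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
    // into a time.Time. Illustrative helper, not minikube's implementation.
    func parseGuestClock(out string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(out, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(secs)
    	nsec := int64((secs - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1726274740.021164268") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// Assumed tolerance: treat skews under 2s as acceptable, otherwise resync.
    	if delta < 2*time.Second {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v is too large, would resync\n", delta)
    	}
    }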
	I0914 00:45:40.060522   54475 start.go:83] releasing machines lock for "kubernetes-upgrade-271886", held for 24.847778963s
	I0914 00:45:40.060550   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:40.060858   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetIP
	I0914 00:45:40.064015   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:40.064419   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:40.064449   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:40.064650   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:40.065221   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:40.065448   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .DriverName
	I0914 00:45:40.065545   54475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:45:40.065598   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:40.065666   54475 ssh_runner.go:195] Run: cat /version.json
	I0914 00:45:40.065692   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHHostname
	I0914 00:45:40.068588   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:40.069466   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:40.069508   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:40.069536   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:40.070104   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:40.070290   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:40.070312   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:40.070342   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:40.070473   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:40.070561   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHPort
	I0914 00:45:40.070643   54475 sshutil.go:53] new ssh client: &{IP:192.168.61.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa Username:docker}
	I0914 00:45:40.070696   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHKeyPath
	I0914 00:45:40.070809   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetSSHUsername
	I0914 00:45:40.070953   54475 sshutil.go:53] new ssh client: &{IP:192.168.61.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/kubernetes-upgrade-271886/id_rsa Username:docker}
	I0914 00:45:40.185836   54475 ssh_runner.go:195] Run: systemctl --version
	I0914 00:45:40.191873   54475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:45:40.355693   54475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:45:40.361740   54475 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:45:40.361817   54475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:45:40.379541   54475 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 00:45:40.379577   54475 start.go:495] detecting cgroup driver to use...
	I0914 00:45:40.379655   54475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:45:40.395871   54475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:45:40.409239   54475 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:45:40.409297   54475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:45:40.422431   54475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:45:40.438130   54475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:45:40.557307   54475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:45:40.702846   54475 docker.go:233] disabling docker service ...
	I0914 00:45:40.702924   54475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:45:40.717773   54475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:45:40.731482   54475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:45:40.868465   54475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:45:40.987922   54475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:45:41.002660   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:45:41.021047   54475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 00:45:41.021106   54475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:45:41.032029   54475 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:45:41.032090   54475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:45:41.042701   54475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:45:41.053270   54475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:45:41.064017   54475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
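	Note on the cri-o configuration steps above: the sed commands rewrite the drop-in at /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the expected pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. Assuming cri-o's standard crio.conf layout, the relevant part of the drop-in ends up looking roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"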
	I0914 00:45:41.076297   54475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:45:41.086525   54475 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 00:45:41.086596   54475 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 00:45:41.099847   54475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:45:41.109958   54475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:45:41.226924   54475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:45:41.325160   54475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:45:41.325230   54475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:45:41.330636   54475 start.go:563] Will wait 60s for crictl version
	I0914 00:45:41.330724   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:41.334671   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:45:41.373436   54475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
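	Note on the "Will wait 60s for socket path" and "Will wait 60s for crictl version" steps: the runtime is only treated as ready once /var/run/crio/crio.sock exists and crictl reports a version. A generic polling sketch in Go (not minikube's start.go code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket is present; crictl version can be queried next
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("crio.sock is ready")
    }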
	I0914 00:45:41.373514   54475 ssh_runner.go:195] Run: crio --version
	I0914 00:45:41.401977   54475 ssh_runner.go:195] Run: crio --version
	I0914 00:45:41.431462   54475 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 00:45:41.432448   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetIP
	I0914 00:45:41.435635   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:41.436043   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:45:29 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:45:41.436089   54475 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:45:41.436334   54475 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 00:45:41.440313   54475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:45:41.451779   54475 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-271886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:45:41.451927   54475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:45:41.451973   54475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:45:41.486412   54475 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 00:45:41.486480   54475 ssh_runner.go:195] Run: which lz4
	I0914 00:45:41.490281   54475 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 00:45:41.494302   54475 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 00:45:41.494346   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 00:45:42.973821   54475 crio.go:462] duration metric: took 1.483589756s to copy over tarball
	I0914 00:45:42.973935   54475 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 00:45:45.577973   54475 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.603997535s)
	I0914 00:45:45.578006   54475 crio.go:469] duration metric: took 2.604152194s to extract the tarball
	I0914 00:45:45.578031   54475 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 00:45:45.620926   54475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:45:45.665078   54475 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
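	Note on the preload path above: minikube lists images with "crictl images --output json" and only falls back to copying and extracting the preloaded tarball when the expected kube-apiserver tag is missing; after extraction it re-runs the same check. A small Go sketch of that presence check, assuming the JSON shape crictl prints (the field names here are an assumption for illustration):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // crictlImages models the part of `crictl images --output json` we care about.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted tag.
    func hasImage(raw []byte, want string) (bool, error) {
    	var out crictlImages
    	if err := json.Unmarshal(raw, &out); err != nil {
    		return false, err
    	}
    	for _, img := range out.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
    	ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0")
    	fmt.Println("preloaded kube-apiserver present:", ok) // false -> copy and extract the tarball
    }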
	I0914 00:45:45.665102   54475 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 00:45:45.665141   54475 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:45:45.665175   54475 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:45.665221   54475 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:45.665235   54475 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:45.665336   54475 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:45.665252   54475 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 00:45:45.665257   54475 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:45.665266   54475 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 00:45:45.666495   54475 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:45.666596   54475 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:45.666615   54475 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 00:45:45.666500   54475 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:45.666698   54475 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 00:45:45.666743   54475 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:45:45.666750   54475 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:45.666773   54475 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:45.897073   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 00:45:45.914622   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:45.952063   54475 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 00:45:45.952118   54475 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 00:45:45.952164   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:45.954900   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:45.956081   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:45.969806   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:45.973314   54475 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 00:45:45.973356   54475 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:45.973387   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 00:45:45.973397   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:46.009406   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 00:45:46.075412   54475 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 00:45:46.075463   54475 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:46.075514   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:46.084496   54475 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 00:45:46.084544   54475 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:46.084561   54475 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 00:45:46.084607   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:46.084621   54475 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:46.084651   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 00:45:46.084668   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:46.084590   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:46.090094   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:46.108106   54475 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 00:45:46.108154   54475 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 00:45:46.108169   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:46.108200   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:46.173993   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:46.174039   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 00:45:46.174070   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:46.174100   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:46.180183   54475 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 00:45:46.180223   54475 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:46.180288   54475 ssh_runner.go:195] Run: which crictl
	I0914 00:45:46.187662   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 00:45:46.187709   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:46.290528   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:45:46.290553   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 00:45:46.303724   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:46.303826   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:46.303872   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:46.311908   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 00:45:46.311944   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:45:46.424630   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 00:45:46.435259   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:45:46.435283   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 00:45:46.435299   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:46.435391   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 00:45:46.435437   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 00:45:46.514992   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 00:45:46.526974   54475 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:45:46.527021   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 00:45:46.527021   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 00:45:46.558523   54475 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 00:45:46.888498   54475 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:45:47.040641   54475 cache_images.go:92] duration metric: took 1.375523589s to LoadCachedImages
	W0914 00:45:47.040716   54475 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
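	Note on the LoadCachedImages sequence above: for each required image the VM's runtime is asked for its stored image ID (podman image inspect); when the ID does not match the expected hash the image "needs transfer", so any stale copy is removed with crictl rmi and the image is loaded from the local cache under .minikube/cache/images. In this run those cache files do not exist, which is what produces the "Unable to load cached images" warning. A rough Go sketch of the decision (hashes copied from the log; command execution simplified to a local call):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // requiredImages pairs two of the images from the list above with the image
    // IDs the log expects to find in the runtime.
    var requiredImages = map[string]string{
    	"registry.k8s.io/pause:3.2":              "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
    	"registry.k8s.io/kube-apiserver:v1.20.0": "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99",
    }

    // inspectID returns the image ID stored by the runtime, or "" if absent.
    func inspectID(image string) string {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	for image, want := range requiredImages {
    		if inspectID(image) == want {
    			fmt.Println(image, "already present in the runtime, skipping")
    			continue
    		}
    		// "needs transfer": remove any stale copy, then load the image from
    		// .minikube/cache/images/...; here that load fails because the cached
    		// file is missing, hence the warning in the log.
    		fmt.Println(image, "needs transfer")
    	}
    }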
	I0914 00:45:47.040743   54475 kubeadm.go:934] updating node { 192.168.61.53 8443 v1.20.0 crio true true} ...
	I0914 00:45:47.040859   54475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-271886 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:45:47.040948   54475 ssh_runner.go:195] Run: crio config
	I0914 00:45:47.093123   54475 cni.go:84] Creating CNI manager for ""
	I0914 00:45:47.093149   54475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:45:47.093160   54475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:45:47.093187   54475 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-271886 NodeName:kubernetes-upgrade-271886 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 00:45:47.093419   54475 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-271886"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:45:47.093500   54475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 00:45:47.106239   54475 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:45:47.106322   54475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:45:47.117281   54475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0914 00:45:47.135388   54475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:45:47.152927   54475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 00:45:47.170779   54475 ssh_runner.go:195] Run: grep 192.168.61.53	control-plane.minikube.internal$ /etc/hosts
	I0914 00:45:47.174598   54475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:45:47.186609   54475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:45:47.299428   54475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:45:47.316333   54475 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886 for IP: 192.168.61.53
	I0914 00:45:47.316359   54475 certs.go:194] generating shared ca certs ...
	I0914 00:45:47.316380   54475 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:45:47.316558   54475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:45:47.316616   54475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:45:47.316629   54475 certs.go:256] generating profile certs ...
	I0914 00:45:47.316700   54475 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.key
	I0914 00:45:47.316730   54475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.crt with IP's: []
	I0914 00:45:47.412237   54475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.crt ...
	I0914 00:45:47.412265   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.crt: {Name:mk8565c10e1642535bc317d5f4861efdd779f8d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:45:47.412457   54475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.key ...
	I0914 00:45:47.412480   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.key: {Name:mkd2a379390205ba86f702b429e97174ad7675eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:45:47.412616   54475 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key.e27f8ad5
	I0914 00:45:47.412642   54475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt.e27f8ad5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.53]
	I0914 00:45:47.533711   54475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt.e27f8ad5 ...
	I0914 00:45:47.533745   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt.e27f8ad5: {Name:mk2327bd30358f2d15c0aebf5c03b7e9dc88d544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:45:47.533946   54475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key.e27f8ad5 ...
	I0914 00:45:47.533963   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key.e27f8ad5: {Name:mkb3bda76a4a78304f5b48b5850fed30355852c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:45:47.534071   54475 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt.e27f8ad5 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt
	I0914 00:45:47.534182   54475 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key.e27f8ad5 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key
	I0914 00:45:47.534284   54475 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.key
	I0914 00:45:47.534312   54475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.crt with IP's: []
	I0914 00:45:47.809079   54475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.crt ...
	I0914 00:45:47.809121   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.crt: {Name:mk5bd83c1e67a472b6dee067f42cab5475acd185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:45:47.809290   54475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.key ...
	I0914 00:45:47.809310   54475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.key: {Name:mk89774d857eb9340bc7789820d53b83f28a9e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
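	Note on the certificate generation above: the profile certificates are issued locally and signed by the shared minikubeCA, and the apiserver certificate carries the IP SANs listed in the log (10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.53). A condensed Go sketch of issuing such a CA-signed serving certificate with crypto/x509 (error handling elided; an illustration, not minikube's crypto.go):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Apiserver serving certificate with the IP SANs from the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.53"),
    		},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Printf("issued apiserver cert: %d bytes of DER\n", len(srvDER))
    }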
	I0914 00:45:47.809603   54475 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:45:47.809659   54475 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:45:47.809673   54475 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:45:47.809702   54475 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:45:47.809725   54475 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:45:47.809746   54475 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:45:47.809785   54475 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:45:47.810366   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:45:47.835808   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:45:47.861159   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:45:47.884625   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:45:47.910430   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0914 00:45:47.933756   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:45:47.959737   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:45:47.989181   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:45:48.018373   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:45:48.044768   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:45:48.074286   54475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:45:48.102602   54475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:45:48.129328   54475 ssh_runner.go:195] Run: openssl version
	I0914 00:45:48.137838   54475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:45:48.151401   54475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:45:48.159724   54475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:45:48.159822   54475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:45:48.169120   54475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:45:48.187015   54475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:45:48.204905   54475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:45:48.210908   54475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:45:48.210985   54475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:45:48.219512   54475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:45:48.231264   54475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:45:48.242680   54475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:45:48.249229   54475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:45:48.249308   54475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:45:48.256950   54475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:45:48.273213   54475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:45:48.278805   54475 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:45:48.278864   54475 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-271886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:45:48.278932   54475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:45:48.278981   54475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:45:48.321557   54475 cri.go:89] found id: ""
	I0914 00:45:48.321630   54475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:45:48.332118   54475 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:45:48.343317   54475 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:45:48.354558   54475 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:45:48.354581   54475 kubeadm.go:157] found existing configuration files:
	
	I0914 00:45:48.354637   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:45:48.365267   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:45:48.365342   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:45:48.376575   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:45:48.386290   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:45:48.386363   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:45:48.396810   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:45:48.407574   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:45:48.407648   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:45:48.418346   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:45:48.429290   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:45:48.429371   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:45:48.440028   54475 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 00:45:48.726454   54475 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:47:46.156129   54475 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 00:47:46.156327   54475 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 00:47:46.157419   54475 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 00:47:46.157515   54475 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:47:46.157709   54475 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:47:46.157869   54475 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:47:46.158048   54475 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 00:47:46.158190   54475 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:47:46.160560   54475 out.go:235]   - Generating certificates and keys ...
	I0914 00:47:46.160650   54475 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:47:46.160728   54475 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:47:46.160809   54475 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:47:46.160889   54475 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:47:46.160980   54475 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:47:46.161057   54475 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:47:46.161138   54475 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:47:46.161288   54475 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-271886 localhost] and IPs [192.168.61.53 127.0.0.1 ::1]
	I0914 00:47:46.161366   54475 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:47:46.161515   54475 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-271886 localhost] and IPs [192.168.61.53 127.0.0.1 ::1]
	I0914 00:47:46.161620   54475 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:47:46.161722   54475 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:47:46.161783   54475 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:47:46.161893   54475 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:47:46.161967   54475 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:47:46.162039   54475 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:47:46.162134   54475 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:47:46.162209   54475 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:47:46.162340   54475 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:47:46.162447   54475 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:47:46.162514   54475 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:47:46.162606   54475 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:47:46.164007   54475 out.go:235]   - Booting up control plane ...
	I0914 00:47:46.164100   54475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:47:46.164184   54475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:47:46.164296   54475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:47:46.164420   54475 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:47:46.164631   54475 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 00:47:46.164712   54475 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 00:47:46.164800   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:47:46.165032   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:47:46.165101   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:47:46.165287   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:47:46.165369   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:47:46.165603   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:47:46.165712   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:47:46.165888   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:47:46.165969   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:47:46.166206   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:47:46.166214   54475 kubeadm.go:310] 
	I0914 00:47:46.166253   54475 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 00:47:46.166294   54475 kubeadm.go:310] 		timed out waiting for the condition
	I0914 00:47:46.166302   54475 kubeadm.go:310] 
	I0914 00:47:46.166333   54475 kubeadm.go:310] 	This error is likely caused by:
	I0914 00:47:46.166361   54475 kubeadm.go:310] 		- The kubelet is not running
	I0914 00:47:46.166459   54475 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 00:47:46.166469   54475 kubeadm.go:310] 
	I0914 00:47:46.166605   54475 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 00:47:46.166654   54475 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 00:47:46.166711   54475 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 00:47:46.166720   54475 kubeadm.go:310] 
	I0914 00:47:46.166845   54475 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 00:47:46.166954   54475 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 00:47:46.166962   54475 kubeadm.go:310] 
	I0914 00:47:46.167088   54475 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 00:47:46.167171   54475 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 00:47:46.167279   54475 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 00:47:46.167390   54475 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 00:47:46.167417   54475 kubeadm.go:310] 
	W0914 00:47:46.167534   54475 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-271886 localhost] and IPs [192.168.61.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-271886 localhost] and IPs [192.168.61.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 00:47:46.167578   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 00:47:46.706119   54475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:47:46.720267   54475 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:47:46.729423   54475 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:47:46.729444   54475 kubeadm.go:157] found existing configuration files:
	
	I0914 00:47:46.729486   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:47:46.738461   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:47:46.738516   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:47:46.747494   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:47:46.756266   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:47:46.756335   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:47:46.765745   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:47:46.774623   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:47:46.774695   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:47:46.784822   54475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:47:46.793669   54475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:47:46.793731   54475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:47:46.802768   54475 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 00:47:47.018820   54475 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:49:43.252832   54475 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 00:49:43.252975   54475 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 00:49:43.254422   54475 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 00:49:43.254479   54475 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:49:43.254584   54475 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:49:43.254693   54475 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:49:43.254822   54475 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 00:49:43.254940   54475 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:49:43.256643   54475 out.go:235]   - Generating certificates and keys ...
	I0914 00:49:43.256747   54475 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:49:43.256856   54475 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:49:43.256963   54475 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 00:49:43.257056   54475 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 00:49:43.257144   54475 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 00:49:43.257238   54475 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 00:49:43.257353   54475 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 00:49:43.257435   54475 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 00:49:43.257534   54475 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 00:49:43.257641   54475 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 00:49:43.257704   54475 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 00:49:43.257786   54475 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:49:43.257860   54475 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:49:43.257935   54475 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:49:43.258026   54475 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:49:43.258119   54475 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:49:43.258254   54475 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:49:43.258342   54475 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:49:43.258406   54475 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:49:43.258493   54475 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:49:43.260033   54475 out.go:235]   - Booting up control plane ...
	I0914 00:49:43.260127   54475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:49:43.260212   54475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:49:43.260285   54475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:49:43.260370   54475 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:49:43.260508   54475 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 00:49:43.260567   54475 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 00:49:43.260641   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:49:43.260832   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:49:43.260892   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:49:43.261051   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:49:43.261118   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:49:43.261320   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:49:43.261411   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:49:43.261648   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:49:43.261739   54475 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:49:43.261988   54475 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:49:43.262002   54475 kubeadm.go:310] 
	I0914 00:49:43.262049   54475 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 00:49:43.262111   54475 kubeadm.go:310] 		timed out waiting for the condition
	I0914 00:49:43.262128   54475 kubeadm.go:310] 
	I0914 00:49:43.262180   54475 kubeadm.go:310] 	This error is likely caused by:
	I0914 00:49:43.262231   54475 kubeadm.go:310] 		- The kubelet is not running
	I0914 00:49:43.262580   54475 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 00:49:43.262595   54475 kubeadm.go:310] 
	I0914 00:49:43.262747   54475 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 00:49:43.262782   54475 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 00:49:43.262812   54475 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 00:49:43.262819   54475 kubeadm.go:310] 
	I0914 00:49:43.262922   54475 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 00:49:43.263014   54475 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 00:49:43.263025   54475 kubeadm.go:310] 
	I0914 00:49:43.263153   54475 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 00:49:43.263241   54475 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 00:49:43.263314   54475 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 00:49:43.263401   54475 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 00:49:43.263441   54475 kubeadm.go:310] 
	I0914 00:49:43.263490   54475 kubeadm.go:394] duration metric: took 3m54.98462824s to StartCluster
	I0914 00:49:43.263531   54475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:49:43.263585   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:49:43.305631   54475 cri.go:89] found id: ""
	I0914 00:49:43.305656   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.305665   54475 logs.go:278] No container was found matching "kube-apiserver"
	I0914 00:49:43.305672   54475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:49:43.305722   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:49:43.342483   54475 cri.go:89] found id: ""
	I0914 00:49:43.342508   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.342515   54475 logs.go:278] No container was found matching "etcd"
	I0914 00:49:43.342522   54475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:49:43.342582   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:49:43.380056   54475 cri.go:89] found id: ""
	I0914 00:49:43.380084   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.380092   54475 logs.go:278] No container was found matching "coredns"
	I0914 00:49:43.380097   54475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:49:43.380149   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:49:43.414358   54475 cri.go:89] found id: ""
	I0914 00:49:43.414384   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.414397   54475 logs.go:278] No container was found matching "kube-scheduler"
	I0914 00:49:43.414403   54475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:49:43.414450   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:49:43.452963   54475 cri.go:89] found id: ""
	I0914 00:49:43.452990   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.452997   54475 logs.go:278] No container was found matching "kube-proxy"
	I0914 00:49:43.453003   54475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:49:43.453051   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:49:43.488338   54475 cri.go:89] found id: ""
	I0914 00:49:43.488375   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.488387   54475 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 00:49:43.488395   54475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:49:43.488461   54475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:49:43.523353   54475 cri.go:89] found id: ""
	I0914 00:49:43.523384   54475 logs.go:276] 0 containers: []
	W0914 00:49:43.523393   54475 logs.go:278] No container was found matching "kindnet"
	I0914 00:49:43.523404   54475 logs.go:123] Gathering logs for kubelet ...
	I0914 00:49:43.523419   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 00:49:43.576809   54475 logs.go:123] Gathering logs for dmesg ...
	I0914 00:49:43.576848   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:49:43.590295   54475 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:49:43.590325   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 00:49:43.711218   54475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 00:49:43.711251   54475 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:49:43.711267   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:49:43.815194   54475 logs.go:123] Gathering logs for container status ...
	I0914 00:49:43.815241   54475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 00:49:43.855925   54475 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 00:49:43.855982   54475 out.go:270] * 
	W0914 00:49:43.856039   54475 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 00:49:43.856053   54475 out.go:270] * 
	W0914 00:49:43.856917   54475 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:49:43.859974   54475 out.go:201] 
	W0914 00:49:43.861409   54475 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 00:49:43.861473   54475 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 00:49:43.861499   54475 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 00:49:43.863543   54475 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
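A minimal troubleshooting sketch for this failure mode, not part of the captured run: the suggestion in the log points at a kubelet/CRI-O cgroup-driver mismatch, which can be checked with the same "minikube ssh" pattern the harness uses elsewhere in this report (profile name and paths are taken from this log; adjust as needed):

	# compare the cgroup driver CRI-O reports with the one written into the kubelet config
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-271886 sudo crio config | grep -i cgroup_manager
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-271886 sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	# if the two disagree, retry the start with the flag the log itself suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd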
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-271886
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-271886: (1.307487991s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-271886 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-271886 status --format={{.Host}}: exit status 7 (64.058879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
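A brief aside on the exit code, assuming minikube's documented behaviour of encoding host, cluster, and Kubernetes state as bits in the status exit code: 7 right after a stop simply means all three are down, which is consistent with the "Stopped" output above. A quick check:

	out/minikube-linux-amd64 -p kubernetes-upgrade-271886 status --format={{.Host}}; echo "exit=$?"
	# expected here: Stopped, exit=7 (1 host down + 2 cluster down + 4 kubernetes down)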
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.615590441s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-271886 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (79.479279ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-271886] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-271886
	    minikube start -p kubernetes-upgrade-271886 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2718862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-271886 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
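A small usage sketch, restating the non-destructive path (option 2 from the suggestion above) with the driver and runtime flags this job uses; the follow-up check mirrors the kubectl command the test itself runs:

	# keep the existing v1.31.1 cluster and create a separate profile for the old version
	out/minikube-linux-amd64 start -p kubernetes-upgrade-2718862 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	# confirm the original cluster is still serving v1.31.1
	kubectl --context kubernetes-upgrade-271886 version --output=json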
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-271886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.586049506s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-14 00:51:01.637774464 +0000 UTC m=+5076.065419666
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-271886 -n kubernetes-upgrade-271886
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-271886 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-271886 logs -n 25: (1.73001035s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo docker                        | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo cat                           | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo                               | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo find                          | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-670449 sudo crio                          | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-670449                                    | kindnet-670449        | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC | 14 Sep 24 00:50 UTC |
	| start   | -p custom-flannel-670449                             | custom-flannel-670449 | jenkins | v1.34.0 | 14 Sep 24 00:50 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:50:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:50:45.742658   61263 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:50:45.742845   61263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:50:45.742858   61263 out.go:358] Setting ErrFile to fd 2...
	I0914 00:50:45.742866   61263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:50:45.743136   61263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:50:45.743980   61263 out.go:352] Setting JSON to false
	I0914 00:50:45.745484   61263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5592,"bootTime":1726269454,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:50:45.745620   61263 start.go:139] virtualization: kvm guest
	I0914 00:50:45.748020   61263 out.go:177] * [custom-flannel-670449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:50:45.749563   61263 notify.go:220] Checking for updates...
	I0914 00:50:45.749578   61263 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:50:45.750964   61263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:50:45.752300   61263 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:50:45.753561   61263 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:50:45.754957   61263 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:50:45.756201   61263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:50:45.758232   61263 config.go:182] Loaded profile config "calico-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:50:45.758392   61263 config.go:182] Loaded profile config "kubernetes-upgrade-271886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:50:45.758584   61263 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:50:45.758708   61263 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:50:45.807565   61263 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 00:50:45.808792   61263 start.go:297] selected driver: kvm2
	I0914 00:50:45.808818   61263 start.go:901] validating driver "kvm2" against <nil>
	I0914 00:50:45.808835   61263 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:50:45.809927   61263 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:50:45.810053   61263 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:50:45.828636   61263 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:50:45.828697   61263 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:50:45.829075   61263 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:50:45.829137   61263 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0914 00:50:45.829155   61263 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0914 00:50:45.829241   61263 start.go:340] cluster config:
	{Name:custom-flannel-670449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:50:45.829407   61263 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:50:45.831301   61263 out.go:177] * Starting "custom-flannel-670449" primary control-plane node in "custom-flannel-670449" cluster
	I0914 00:50:45.832514   61263 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:50:45.832593   61263 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:50:45.832607   61263 cache.go:56] Caching tarball of preloaded images
	I0914 00:50:45.832696   61263 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:50:45.832708   61263 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:50:45.832825   61263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/config.json ...
	I0914 00:50:45.832853   61263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/config.json: {Name:mkbd9ccc8a0b73450b24c94def9bbe31ed8102b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:50:45.833056   61263 start.go:360] acquireMachinesLock for custom-flannel-670449: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:50:45.833105   61263 start.go:364] duration metric: took 26.758µs to acquireMachinesLock for "custom-flannel-670449"
	I0914 00:50:45.833132   61263 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-670449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:50:45.833247   61263 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 00:50:44.442341   59785 main.go:141] libmachine: (kubernetes-upgrade-271886) Calling .GetIP
	I0914 00:50:44.445442   59785 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:50:44.445854   59785 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:e8", ip: ""} in network mk-kubernetes-upgrade-271886: {Iface:virbr2 ExpiryTime:2024-09-14 01:49:56 +0000 UTC Type:0 Mac:52:54:00:de:bd:e8 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:kubernetes-upgrade-271886 Clientid:01:52:54:00:de:bd:e8}
	I0914 00:50:44.445887   59785 main.go:141] libmachine: (kubernetes-upgrade-271886) DBG | domain kubernetes-upgrade-271886 has defined IP address 192.168.61.53 and MAC address 52:54:00:de:bd:e8 in network mk-kubernetes-upgrade-271886
	I0914 00:50:44.446147   59785 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 00:50:44.450422   59785 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-271886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:50:44.450511   59785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:50:44.450553   59785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:50:44.494938   59785 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:50:44.494963   59785 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:50:44.495019   59785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:50:44.540916   59785 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:50:44.540942   59785 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:50:44.540952   59785 kubeadm.go:934] updating node { 192.168.61.53 8443 v1.31.1 crio true true} ...
	I0914 00:50:44.541079   59785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-271886 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:50:44.541160   59785 ssh_runner.go:195] Run: crio config
	I0914 00:50:44.698929   59785 cni.go:84] Creating CNI manager for ""
	I0914 00:50:44.698953   59785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:50:44.698964   59785 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:50:44.698993   59785 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.53 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-271886 NodeName:kubernetes-upgrade-271886 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:50:44.699154   59785 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-271886"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:50:44.699222   59785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:50:44.771638   59785 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:50:44.771719   59785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:50:44.901642   59785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0914 00:50:45.077795   59785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:50:45.252352   59785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0914 00:50:45.293221   59785 ssh_runner.go:195] Run: grep 192.168.61.53	control-plane.minikube.internal$ /etc/hosts
	I0914 00:50:45.304605   59785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:50:45.507361   59785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:50:45.541510   59785 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886 for IP: 192.168.61.53
	I0914 00:50:45.541538   59785 certs.go:194] generating shared ca certs ...
	I0914 00:50:45.541561   59785 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:50:45.541753   59785 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:50:45.541814   59785 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:50:45.541828   59785 certs.go:256] generating profile certs ...
	I0914 00:50:45.541941   59785 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/client.key
	I0914 00:50:45.542060   59785 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key.e27f8ad5
	I0914 00:50:45.542129   59785 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.key
	I0914 00:50:45.542375   59785 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:50:45.542426   59785 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:50:45.542438   59785 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:50:45.542474   59785 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:50:45.542516   59785 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:50:45.542546   59785 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:50:45.542597   59785 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:50:45.543383   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:50:45.603763   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:50:45.678778   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:50:45.724320   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:50:45.770167   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0914 00:50:45.810994   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:50:45.845965   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:50:45.900285   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kubernetes-upgrade-271886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:50:45.935160   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:50:45.968576   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:50:46.002610   59785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:50:46.031422   59785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:50:46.054172   59785 ssh_runner.go:195] Run: openssl version
	I0914 00:50:46.062861   59785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:50:46.075430   59785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:50:46.081549   59785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:50:46.081625   59785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:50:46.090252   59785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:50:46.100258   59785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:50:46.118182   59785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:50:46.123001   59785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:50:46.123080   59785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:50:46.129911   59785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:50:46.143354   59785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:50:46.158977   59785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:50:46.165310   59785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:50:46.165393   59785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:50:46.171874   59785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:50:46.183711   59785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:50:46.188867   59785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:50:46.196724   59785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:50:46.204930   59785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:50:46.211851   59785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:50:46.218025   59785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:50:46.224361   59785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 00:50:46.232386   59785 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-271886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-271886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:50:46.232485   59785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:50:46.232545   59785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:50:46.276870   59785 cri.go:89] found id: "1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4"
	I0914 00:50:46.276905   59785 cri.go:89] found id: "48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61"
	I0914 00:50:46.276911   59785 cri.go:89] found id: "52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d"
	I0914 00:50:46.276917   59785 cri.go:89] found id: "e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e"
	I0914 00:50:46.276921   59785 cri.go:89] found id: "b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf"
	I0914 00:50:46.276927   59785 cri.go:89] found id: "b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f"
	I0914 00:50:46.276931   59785 cri.go:89] found id: "4ea07f51a113b745c9573fbfa03d6835c1d4d7de94fef98bc45112f326339465"
	I0914 00:50:46.276936   59785 cri.go:89] found id: "2b5d9b0ab67ca6d600225f19534c1a462022e0bc735c1407ec13ff5bac0e43cb"
	I0914 00:50:46.276948   59785 cri.go:89] found id: "78222062fc8172b8ee11191f3b18338c1da9da9958d4e3a4538104bd96dfd4ca"
	I0914 00:50:46.276957   59785 cri.go:89] found id: ""
	I0914 00:50:46.277019   59785 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.403162447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275062403139396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c772ad6-8215-4627-bd2c-803b6d176a8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.403623595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecb258b6-a37c-4996-8f1b-28a6dd09db23 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.403726945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecb258b6-a37c-4996-8f1b-28a6dd09db23 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.404049026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201aff5cc804e27f903ef4431b5bb4eb362594fa65a29ac5d3950f51f1848125,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059366166857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1622449526778eef93b61ff2d6280d1a20a0bc7c524049a338be09417090d0a,PodSandboxId:9194e673b77726f0ebca66e396c7284f5a0881d96935ab71857571e10ce43241,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059388213106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67e49477737a4edec69b2b7fea90c0908f0ee2bc2c10f2f965e25ee14fe7a69,PodSandboxId:75ffdf48ecb999cb7b9e98541ab1528ed2d3ca3664dedc0356b477cdd1e57f64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726275059381158369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692013ef9998abe65477be53c45f0665b69b980dcc225a8539798a1b5483607,PodSandboxId:a8a9990ad44c220040815f99dfe0c805e10ea632ad9ee57c8a490c8a65c6a8e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726275059338985533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a567707b8572cbe90cc25fdcba29c0a90754614ec38a7a50eef785afb614a8ea,PodSandboxId:e10db371df067bdc4b9dffd4681cc8d82ca3c8ea261f6381c3067567756e4571,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275058543791354,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c52bc6cc5a5536f148041625df5ff2acc782d4d36213e2cbbf97951635b35f7,PodSandboxId:0794c40c1b250cded512d6fc3cd30902b1e706f1adae382735daa2ad05551b95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275058554081
308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932bac785f3d491f517902edc80f5ad67a6ae204e5a918a67c07c3d3f2e5f59,PodSandboxId:390ad9214ba7868b67fa5b3719cc12d9da36297600643c8278870c1376faed06,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275054114
700127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f862665c99809845ff8d44084a278afc1ed7d275575a86b99123d0955aa2fb,PodSandboxId:c60d2011c852a8d7d84cb3464bddb773a06c90b3d673e41ded9ca1aab7bce297,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275054051946593,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275045631022197,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d,PodSandboxId:b76b61307ab9296d91f33039f850da3fb67f407bfd910e6bff518521e962aa11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726275042838227329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e,PodSandboxId:d26e59640e03a3bdd708d1af3195ba7243c492f0e7ad857fbdaf51883af74259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275042786745892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61,PodSandboxId:97a1ac4443f602aecf83c6f70f774f1b30d48bbc3e13112b24bf7b33f4f705ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275043395870136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf,PodSandboxId:fae1dbec51cbfc2bf3b530e0fa34cde52c7a16ccd64d078466f8dbb1f72ddf15,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275042232851004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f,PodSandboxId:cf65020b07169054e40cb64035089aa6c8fb81a8726e394d9e58fcf515152f18,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726275042020722142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea07f51a113b745c9573fbfa03d6835c1d4d7de94fef98bc45112f326339465,PodSandboxId:ff3a0c310fdbadd12fc52db6fb1345e7c6bef3fbf5e3419c8501848609fa4e8b,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275042019654492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5d9b0ab67ca6d600225f19534c1a462022e0bc735c1407ec13ff5bac0e43cb,PodSandboxId:2c8f75a0141b4ad102cef7360e70ff3991a17a0ce3a92be9de4d1ca9d9d7687c,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726275041960829413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecb258b6-a37c-4996-8f1b-28a6dd09db23 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.449885620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e757d4d9-cb3c-4f41-a99c-c6aed7867f6f name=/runtime.v1.RuntimeService/Version
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.449989450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e757d4d9-cb3c-4f41-a99c-c6aed7867f6f name=/runtime.v1.RuntimeService/Version
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.451397838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0249c17-b50b-4d7b-a381-4cbf9bf3d761 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.451914395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275062451888019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0249c17-b50b-4d7b-a381-4cbf9bf3d761 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.452457758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f78ecdd-4247-486d-8e40-7023fce49173 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.452514421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f78ecdd-4247-486d-8e40-7023fce49173 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.452868660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201aff5cc804e27f903ef4431b5bb4eb362594fa65a29ac5d3950f51f1848125,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059366166857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1622449526778eef93b61ff2d6280d1a20a0bc7c524049a338be09417090d0a,PodSandboxId:9194e673b77726f0ebca66e396c7284f5a0881d96935ab71857571e10ce43241,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059388213106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67e49477737a4edec69b2b7fea90c0908f0ee2bc2c10f2f965e25ee14fe7a69,PodSandboxId:75ffdf48ecb999cb7b9e98541ab1528ed2d3ca3664dedc0356b477cdd1e57f64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726275059381158369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692013ef9998abe65477be53c45f0665b69b980dcc225a8539798a1b5483607,PodSandboxId:a8a9990ad44c220040815f99dfe0c805e10ea632ad9ee57c8a490c8a65c6a8e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726275059338985533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a567707b8572cbe90cc25fdcba29c0a90754614ec38a7a50eef785afb614a8ea,PodSandboxId:e10db371df067bdc4b9dffd4681cc8d82ca3c8ea261f6381c3067567756e4571,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275058543791354,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c52bc6cc5a5536f148041625df5ff2acc782d4d36213e2cbbf97951635b35f7,PodSandboxId:0794c40c1b250cded512d6fc3cd30902b1e706f1adae382735daa2ad05551b95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275058554081
308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932bac785f3d491f517902edc80f5ad67a6ae204e5a918a67c07c3d3f2e5f59,PodSandboxId:390ad9214ba7868b67fa5b3719cc12d9da36297600643c8278870c1376faed06,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275054114
700127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f862665c99809845ff8d44084a278afc1ed7d275575a86b99123d0955aa2fb,PodSandboxId:c60d2011c852a8d7d84cb3464bddb773a06c90b3d673e41ded9ca1aab7bce297,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275054051946593,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275045631022197,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d,PodSandboxId:b76b61307ab9296d91f33039f850da3fb67f407bfd910e6bff518521e962aa11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726275042838227329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e,PodSandboxId:d26e59640e03a3bdd708d1af3195ba7243c492f0e7ad857fbdaf51883af74259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275042786745892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61,PodSandboxId:97a1ac4443f602aecf83c6f70f774f1b30d48bbc3e13112b24bf7b33f4f705ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275043395870136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf,PodSandboxId:fae1dbec51cbfc2bf3b530e0fa34cde52c7a16ccd64d078466f8dbb1f72ddf15,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275042232851004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f,PodSandboxId:cf65020b07169054e40cb64035089aa6c8fb81a8726e394d9e58fcf515152f18,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726275042020722142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea07f51a113b745c9573fbfa03d6835c1d4d7de94fef98bc45112f326339465,PodSandboxId:ff3a0c310fdbadd12fc52db6fb1345e7c6bef3fbf5e3419c8501848609fa4e8b,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275042019654492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5d9b0ab67ca6d600225f19534c1a462022e0bc735c1407ec13ff5bac0e43cb,PodSandboxId:2c8f75a0141b4ad102cef7360e70ff3991a17a0ce3a92be9de4d1ca9d9d7687c,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726275041960829413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f78ecdd-4247-486d-8e40-7023fce49173 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.499675333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6268fd0d-b2e4-4f93-9984-b9094095bcfa name=/runtime.v1.RuntimeService/Version
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.499759856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6268fd0d-b2e4-4f93-9984-b9094095bcfa name=/runtime.v1.RuntimeService/Version
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.500628152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f77b24b-3e93-46c9-936f-b0249a09a848 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.500979047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275062500956587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f77b24b-3e93-46c9-936f-b0249a09a848 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.501499153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1f44c14-f535-4bec-8bbd-ae067393ae82 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.501549732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1f44c14-f535-4bec-8bbd-ae067393ae82 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.501976035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201aff5cc804e27f903ef4431b5bb4eb362594fa65a29ac5d3950f51f1848125,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059366166857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1622449526778eef93b61ff2d6280d1a20a0bc7c524049a338be09417090d0a,PodSandboxId:9194e673b77726f0ebca66e396c7284f5a0881d96935ab71857571e10ce43241,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059388213106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67e49477737a4edec69b2b7fea90c0908f0ee2bc2c10f2f965e25ee14fe7a69,PodSandboxId:75ffdf48ecb999cb7b9e98541ab1528ed2d3ca3664dedc0356b477cdd1e57f64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726275059381158369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692013ef9998abe65477be53c45f0665b69b980dcc225a8539798a1b5483607,PodSandboxId:a8a9990ad44c220040815f99dfe0c805e10ea632ad9ee57c8a490c8a65c6a8e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726275059338985533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a567707b8572cbe90cc25fdcba29c0a90754614ec38a7a50eef785afb614a8ea,PodSandboxId:e10db371df067bdc4b9dffd4681cc8d82ca3c8ea261f6381c3067567756e4571,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275058543791354,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c52bc6cc5a5536f148041625df5ff2acc782d4d36213e2cbbf97951635b35f7,PodSandboxId:0794c40c1b250cded512d6fc3cd30902b1e706f1adae382735daa2ad05551b95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275058554081
308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932bac785f3d491f517902edc80f5ad67a6ae204e5a918a67c07c3d3f2e5f59,PodSandboxId:390ad9214ba7868b67fa5b3719cc12d9da36297600643c8278870c1376faed06,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275054114
700127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f862665c99809845ff8d44084a278afc1ed7d275575a86b99123d0955aa2fb,PodSandboxId:c60d2011c852a8d7d84cb3464bddb773a06c90b3d673e41ded9ca1aab7bce297,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275054051946593,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275045631022197,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d,PodSandboxId:b76b61307ab9296d91f33039f850da3fb67f407bfd910e6bff518521e962aa11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726275042838227329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e,PodSandboxId:d26e59640e03a3bdd708d1af3195ba7243c492f0e7ad857fbdaf51883af74259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275042786745892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61,PodSandboxId:97a1ac4443f602aecf83c6f70f774f1b30d48bbc3e13112b24bf7b33f4f705ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275043395870136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf,PodSandboxId:fae1dbec51cbfc2bf3b530e0fa34cde52c7a16ccd64d078466f8dbb1f72ddf15,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275042232851004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f,PodSandboxId:cf65020b07169054e40cb64035089aa6c8fb81a8726e394d9e58fcf515152f18,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726275042020722142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea07f51a113b745c9573fbfa03d6835c1d4d7de94fef98bc45112f326339465,PodSandboxId:ff3a0c310fdbadd12fc52db6fb1345e7c6bef3fbf5e3419c8501848609fa4e8b,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275042019654492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5d9b0ab67ca6d600225f19534c1a462022e0bc735c1407ec13ff5bac0e43cb,PodSandboxId:2c8f75a0141b4ad102cef7360e70ff3991a17a0ce3a92be9de4d1ca9d9d7687c,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726275041960829413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1f44c14-f535-4bec-8bbd-ae067393ae82 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.555631978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b631173-0e18-4033-812c-7ad17988770a name=/runtime.v1.RuntimeService/Version
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.555701621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b631173-0e18-4033-812c-7ad17988770a name=/runtime.v1.RuntimeService/Version
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.559810439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87066567-3dd1-44cd-8e0c-cfe0ba20e26b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.560192936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275062560167538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87066567-3dd1-44cd-8e0c-cfe0ba20e26b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.560836679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2677649-c8a3-4771-8f89-5078ed68dab3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.560894162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2677649-c8a3-4771-8f89-5078ed68dab3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:51:02 kubernetes-upgrade-271886 crio[2977]: time="2024-09-14 00:51:02.561370241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201aff5cc804e27f903ef4431b5bb4eb362594fa65a29ac5d3950f51f1848125,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059366166857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1622449526778eef93b61ff2d6280d1a20a0bc7c524049a338be09417090d0a,PodSandboxId:9194e673b77726f0ebca66e396c7284f5a0881d96935ab71857571e10ce43241,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275059388213106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67e49477737a4edec69b2b7fea90c0908f0ee2bc2c10f2f965e25ee14fe7a69,PodSandboxId:75ffdf48ecb999cb7b9e98541ab1528ed2d3ca3664dedc0356b477cdd1e57f64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726275059381158369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692013ef9998abe65477be53c45f0665b69b980dcc225a8539798a1b5483607,PodSandboxId:a8a9990ad44c220040815f99dfe0c805e10ea632ad9ee57c8a490c8a65c6a8e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726275059338985533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a567707b8572cbe90cc25fdcba29c0a90754614ec38a7a50eef785afb614a8ea,PodSandboxId:e10db371df067bdc4b9dffd4681cc8d82ca3c8ea261f6381c3067567756e4571,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275058543791354,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c52bc6cc5a5536f148041625df5ff2acc782d4d36213e2cbbf97951635b35f7,PodSandboxId:0794c40c1b250cded512d6fc3cd30902b1e706f1adae382735daa2ad05551b95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275058554081
308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932bac785f3d491f517902edc80f5ad67a6ae204e5a918a67c07c3d3f2e5f59,PodSandboxId:390ad9214ba7868b67fa5b3719cc12d9da36297600643c8278870c1376faed06,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275054114
700127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f862665c99809845ff8d44084a278afc1ed7d275575a86b99123d0955aa2fb,PodSandboxId:c60d2011c852a8d7d84cb3464bddb773a06c90b3d673e41ded9ca1aab7bce297,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275054051946593,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4,PodSandboxId:663ee9c619a1e1350bc4889ee07c3d44059a7025021abfa0a806116524bc9e97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275045631022197,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8bfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3883427d-e905-4b2f-bfa7-7cb210e0faec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d,PodSandboxId:b76b61307ab9296d91f33039f850da3fb67f407bfd910e6bff518521e962aa11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726275042838227329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7d8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5473443-8189-476e-951c-dee698bed6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e,PodSandboxId:d26e59640e03a3bdd708d1af3195ba7243c492f0e7ad857fbdaf51883af74259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275042786745892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a3d8d7-5219-4b12-ac01-e1218ae4b90b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61,PodSandboxId:97a1ac4443f602aecf83c6f70f774f1b30d48bbc3e13112b24bf7b33f4f705ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726275043395870136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x42m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e718969d-0dbd-43ef-bfc7-54e4aeff6185,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf,PodSandboxId:fae1dbec51cbfc2bf3b530e0fa34cde52c7a16ccd64d078466f8dbb1f72ddf15,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275042232851004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0310b29489cd70cfcac80b7f5c0608eb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f,PodSandboxId:cf65020b07169054e40cb64035089aa6c8fb81a8726e394d9e58fcf515152f18,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726275042020722142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d2463e646bd9bf1784d7329d4374c8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea07f51a113b745c9573fbfa03d6835c1d4d7de94fef98bc45112f326339465,PodSandboxId:ff3a0c310fdbadd12fc52db6fb1345e7c6bef3fbf5e3419c8501848609fa4e8b,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275042019654492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d260753f79fba711d52fdcb51c0baa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b5d9b0ab67ca6d600225f19534c1a462022e0bc735c1407ec13ff5bac0e43cb,PodSandboxId:2c8f75a0141b4ad102cef7360e70ff3991a17a0ce3a92be9de4d1ca9d9d7687c,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726275041960829413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-271886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58d173a30ba425a763d009daafddd5b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2677649-c8a3-4771-8f89-5078ed68dab3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f162244952677       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   9194e673b7772       coredns-7c65d6cfc9-x42m4
	c67e49477737a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   75ffdf48ecb99       storage-provisioner
	201aff5cc804e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   663ee9c619a1e       coredns-7c65d6cfc9-n8bfg
	5692013ef9998       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   a8a9990ad44c2       kube-proxy-m7d8q
	5c52bc6cc5a55       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   4 seconds ago       Running             kube-controller-manager   2                   0794c40c1b250       kube-controller-manager-kubernetes-upgrade-271886
	a567707b8572c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   4 seconds ago       Running             kube-scheduler            2                   e10db371df067       kube-scheduler-kubernetes-upgrade-271886
	1932bac785f3d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      2                   390ad9214ba78       etcd-kubernetes-upgrade-271886
	44f862665c998       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            2                   c60d2011c852a       kube-apiserver-kubernetes-upgrade-271886
	1e1cc2ab8e331       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 seconds ago      Exited              coredns                   1                   663ee9c619a1e       coredns-7c65d6cfc9-n8bfg
	48ec12fd10eab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Exited              coredns                   1                   97a1ac4443f60       coredns-7c65d6cfc9-x42m4
	52b6a82d81125       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Exited              kube-proxy                1                   b76b61307ab92       kube-proxy-m7d8q
	e8ce763a6b4de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago      Exited              storage-provisioner       1                   d26e59640e03a       storage-provisioner
	b420a306743b4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   20 seconds ago      Exited              kube-controller-manager   1                   fae1dbec51cbf       kube-controller-manager-kubernetes-upgrade-271886
	b703308201aef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   20 seconds ago      Exited              kube-scheduler            1                   cf65020b07169       kube-scheduler-kubernetes-upgrade-271886
	4ea07f51a113b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 seconds ago      Exited              kube-apiserver            1                   ff3a0c310fdba       kube-apiserver-kubernetes-upgrade-271886
	2b5d9b0ab67ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   20 seconds ago      Exited              etcd                      1                   2c8f75a0141b4       etcd-kubernetes-upgrade-271886
	
	
	==> coredns [1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [201aff5cc804e27f903ef4431b5bb4eb362594fa65a29ac5d3950f51f1848125] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61] <==
	
	
	==> coredns [f1622449526778eef93b61ff2d6280d1a20a0bc7c524049a338be09417090d0a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-271886
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-271886
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-271886
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:50:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:50:58 +0000   Sat, 14 Sep 2024 00:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:50:58 +0000   Sat, 14 Sep 2024 00:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:50:58 +0000   Sat, 14 Sep 2024 00:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:50:58 +0000   Sat, 14 Sep 2024 00:50:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.53
	  Hostname:    kubernetes-upgrade-271886
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d2dac6ed83c435187ff3df5598b5ddb
	  System UUID:                2d2dac6e-d83c-4351-87ff-3df5598b5ddb
	  Boot ID:                    88c1ef10-5b07-465c-9255-dc14602bd3ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-n8bfg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     37s
	  kube-system                 coredns-7c65d6cfc9-x42m4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     37s
	  kube-system                 etcd-kubernetes-upgrade-271886                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         38s
	  kube-system                 kube-apiserver-kubernetes-upgrade-271886             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-271886    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-m7d8q                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-kubernetes-upgrade-271886             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 36s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node kubernetes-upgrade-271886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node kubernetes-upgrade-271886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node kubernetes-upgrade-271886 status is now: NodeHasSufficientPID
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           38s                node-controller  Node kubernetes-upgrade-271886 event: Registered Node kubernetes-upgrade-271886 in Controller
	  Normal  Starting                 5s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s                 kubelet          Node kubernetes-upgrade-271886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet          Node kubernetes-upgrade-271886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet          Node kubernetes-upgrade-271886 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-271886 event: Registered Node kubernetes-upgrade-271886 in Controller
	
	
	==> dmesg <==
	[  +2.365439] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep14 00:50] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.067207] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067812] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.171949] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.143629] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.287675] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +4.224555] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +2.111745] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.077812] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.094121] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.108106] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.195652] kauditd_printk_skb: 101 callbacks suppressed
	[ +13.754920] systemd-fstab-generator[2159]: Ignoring "noauto" option for root device
	[  +0.154324] systemd-fstab-generator[2171]: Ignoring "noauto" option for root device
	[  +0.441733] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.364309] systemd-fstab-generator[2477]: Ignoring "noauto" option for root device
	[  +1.130532] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +2.088526] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +8.737286] kauditd_printk_skb: 301 callbacks suppressed
	[  +3.569825] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[  +1.861566] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.740814] systemd-fstab-generator[4367]: Ignoring "noauto" option for root device
	
	
	==> etcd [1932bac785f3d491f517902edc80f5ad67a6ae204e5a918a67c07c3d3f2e5f59] <==
	{"level":"info","ts":"2024-09-14T00:50:54.305705Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:50:54.307671Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:50:54.310290Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T00:50:54.310562Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.53:2380"}
	{"level":"info","ts":"2024-09-14T00:50:54.310608Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.53:2380"}
	{"level":"info","ts":"2024-09-14T00:50:54.310655Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6eb465cec3b0f5","initial-advertise-peer-urls":["https://192.168.61.53:2380"],"listen-peer-urls":["https://192.168.61.53:2380"],"advertise-client-urls":["https://192.168.61.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:50:54.310720Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:50:55.588503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T00:50:55.588636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:50:55.588703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 received MsgPreVoteResp from 6eb465cec3b0f5 at term 2"}
	{"level":"info","ts":"2024-09-14T00:50:55.588738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:50:55.588763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 received MsgVoteResp from 6eb465cec3b0f5 at term 3"}
	{"level":"info","ts":"2024-09-14T00:50:55.588790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T00:50:55.588815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6eb465cec3b0f5 elected leader 6eb465cec3b0f5 at term 3"}
	{"level":"info","ts":"2024-09-14T00:50:55.594729Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:50:55.594687Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6eb465cec3b0f5","local-member-attributes":"{Name:kubernetes-upgrade-271886 ClientURLs:[https://192.168.61.53:2379]}","request-path":"/0/members/6eb465cec3b0f5/attributes","cluster-id":"89af12137150062a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:50:55.595685Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:50:55.595801Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:50:55.596009Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:50:55.596051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:50:55.596637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.53:2379"}
	{"level":"info","ts":"2024-09-14T00:50:55.596831Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:50:55.598074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-14T00:51:00.860699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.586895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-09-14T00:51:00.860767Z","caller":"traceutil/trace.go:171","msg":"trace[1857351497] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:436; }","duration":"188.680763ms","start":"2024-09-14T00:51:00.672073Z","end":"2024-09-14T00:51:00.860754Z","steps":["trace[1857351497] 'range keys from in-memory index tree'  (duration: 182.251799ms)"],"step_count":1}
	
	
	==> etcd [2b5d9b0ab67ca6d600225f19534c1a462022e0bc735c1407ec13ff5bac0e43cb] <==
	{"level":"info","ts":"2024-09-14T00:50:42.669871Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-14T00:50:42.723742Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"89af12137150062a","local-member-id":"6eb465cec3b0f5","commit-index":396}
	{"level":"info","ts":"2024-09-14T00:50:42.723892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-14T00:50:42.723940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 became follower at term 2"}
	{"level":"info","ts":"2024-09-14T00:50:42.723955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6eb465cec3b0f5 [peers: [], term: 2, commit: 396, applied: 0, lastindex: 396, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-14T00:50:42.725788Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-14T00:50:42.778535Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":388}
	{"level":"info","ts":"2024-09-14T00:50:42.833688Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-14T00:50:42.866032Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6eb465cec3b0f5","timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:50:42.866284Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6eb465cec3b0f5"}
	{"level":"info","ts":"2024-09-14T00:50:42.867416Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6eb465cec3b0f5","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-14T00:50:42.867688Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-14T00:50:42.867840Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T00:50:42.867876Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T00:50:42.867915Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T00:50:42.875688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eb465cec3b0f5 switched to configuration voters=(31160596791800053)"}
	{"level":"info","ts":"2024-09-14T00:50:42.875759Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"89af12137150062a","local-member-id":"6eb465cec3b0f5","added-peer-id":"6eb465cec3b0f5","added-peer-peer-urls":["https://192.168.61.53:2380"]}
	{"level":"info","ts":"2024-09-14T00:50:42.875856Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"89af12137150062a","local-member-id":"6eb465cec3b0f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:50:42.875880Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:50:42.911514Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:50:42.966701Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T00:50:42.966945Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6eb465cec3b0f5","initial-advertise-peer-urls":["https://192.168.61.53:2380"],"listen-peer-urls":["https://192.168.61.53:2380"],"advertise-client-urls":["https://192.168.61.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:50:42.966975Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:50:42.967089Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.53:2380"}
	{"level":"info","ts":"2024-09-14T00:50:42.967102Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.53:2380"}
	
	
	==> kernel <==
	 00:51:03 up 1 min,  0 users,  load average: 2.30, 0.61, 0.20
	Linux kubernetes-upgrade-271886 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [44f862665c99809845ff8d44084a278afc1ed7d275575a86b99123d0955aa2fb] <==
	I0914 00:50:57.496576       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 00:50:57.498168       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:50:57.506393       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 00:50:57.513713       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 00:50:57.514055       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 00:50:57.514383       1 aggregator.go:171] initial CRD sync complete...
	I0914 00:50:57.514407       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 00:50:57.514413       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 00:50:57.514419       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:50:57.525060       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 00:50:57.526046       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 00:50:57.526147       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 00:50:57.526182       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 00:50:57.528138       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 00:50:57.532821       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 00:50:57.532845       1 policy_source.go:224] refreshing policies
	I0914 00:50:57.621699       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:50:58.403992       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 00:50:59.790018       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 00:50:59.820066       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:50:59.861095       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:51:00.003038       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:51:00.108810       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:51:00.136532       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 00:51:02.327650       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4ea07f51a113b745c9573fbfa03d6835c1d4d7de94fef98bc45112f326339465] <==
	I0914 00:50:42.697941       1 options.go:228] external host was not specified, using 192.168.61.53
	I0914 00:50:42.708660       1 server.go:142] Version: v1.31.1
	I0914 00:50:42.708729       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [5c52bc6cc5a5536f148041625df5ff2acc782d4d36213e2cbbf97951635b35f7] <==
	I0914 00:51:02.222172       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 00:51:02.222204       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0914 00:51:02.222498       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 00:51:02.222686       1 shared_informer.go:320] Caches are synced for job
	I0914 00:51:02.222830       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 00:51:02.223156       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 00:51:02.237649       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0914 00:51:02.244602       1 shared_informer.go:320] Caches are synced for PVC protection
	I0914 00:51:02.249527       1 shared_informer.go:320] Caches are synced for crt configmap
	I0914 00:51:02.253404       1 shared_informer.go:320] Caches are synced for ephemeral
	I0914 00:51:02.254674       1 shared_informer.go:320] Caches are synced for deployment
	I0914 00:51:02.258421       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0914 00:51:02.262398       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0914 00:51:02.331584       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 00:51:02.333179       1 shared_informer.go:320] Caches are synced for attach detach
	I0914 00:51:02.334931       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0914 00:51:02.351348       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 00:51:02.352473       1 shared_informer.go:320] Caches are synced for PV protection
	I0914 00:51:02.375919       1 shared_informer.go:320] Caches are synced for persistent volume
	I0914 00:51:02.548625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="290.143695ms"
	I0914 00:51:02.578400       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.356864ms"
	I0914 00:51:02.578569       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.817µs"
	I0914 00:51:02.831493       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 00:51:02.831520       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0914 00:51:02.873606       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf] <==
	
	
	==> kube-proxy [52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d] <==
	
	
	==> kube-proxy [5692013ef9998abe65477be53c45f0665b69b980dcc225a8539798a1b5483607] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:50:59.879493       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:50:59.931532       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.53"]
	E0914 00:50:59.931608       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:51:00.006886       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:51:00.006936       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:51:00.006964       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:51:00.015459       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:51:00.015852       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:51:00.015904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:51:00.017973       1 config.go:199] "Starting service config controller"
	I0914 00:51:00.017999       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:51:00.018025       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:51:00.018029       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:51:00.018496       1 config.go:328] "Starting node config controller"
	I0914 00:51:00.018518       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:51:00.118374       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:51:00.118436       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:51:00.118687       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a567707b8572cbe90cc25fdcba29c0a90754614ec38a7a50eef785afb614a8ea] <==
	I0914 00:51:00.090495       1 serving.go:386] Generated self-signed cert in-memory
	I0914 00:51:01.250875       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 00:51:01.250916       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:51:01.260376       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 00:51:01.260831       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0914 00:51:01.260873       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0914 00:51:01.263466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 00:51:01.263644       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 00:51:01.263656       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 00:51:01.263667       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0914 00:51:01.263672       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 00:51:01.363519       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0914 00:51:01.364505       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 00:51:01.364515       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f] <==
	
	
	==> kubelet <==
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.214848    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d260753f79fba711d52fdcb51c0baa68-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-271886\" (UID: \"d260753f79fba711d52fdcb51c0baa68\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.214870    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0310b29489cd70cfcac80b7f5c0608eb-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-271886\" (UID: \"0310b29489cd70cfcac80b7f5c0608eb\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.214906    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0310b29489cd70cfcac80b7f5c0608eb-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-271886\" (UID: \"0310b29489cd70cfcac80b7f5c0608eb\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.214930    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c58d173a30ba425a763d009daafddd5b-etcd-certs\") pod \"etcd-kubernetes-upgrade-271886\" (UID: \"c58d173a30ba425a763d009daafddd5b\") " pod="kube-system/etcd-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.214952    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c58d173a30ba425a763d009daafddd5b-etcd-data\") pod \"etcd-kubernetes-upgrade-271886\" (UID: \"c58d173a30ba425a763d009daafddd5b\") " pod="kube-system/etcd-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.214973    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d260753f79fba711d52fdcb51c0baa68-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-271886\" (UID: \"d260753f79fba711d52fdcb51c0baa68\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: E0914 00:50:58.216275    3922 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-271886\" already exists" pod="kube-system/etcd-kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.237701    3922 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.262391    3922 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.262479    3922 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-271886"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.262501    3922 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.264377    3922 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.511665    3922 scope.go:117] "RemoveContainer" containerID="b420a306743b433412227cc18be9da6761b5d17afad5fb2235bbfeb679017baf"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.513812    3922 scope.go:117] "RemoveContainer" containerID="b703308201aefaed76007fa1f7d45c83ba4f6c5a75fa088fa803d7544f806d3f"
	Sep 14 00:50:58 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:58.964342    3922 apiserver.go:52] "Watching apiserver"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.003101    3922 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.019085    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5473443-8189-476e-951c-dee698bed6c0-xtables-lock\") pod \"kube-proxy-m7d8q\" (UID: \"f5473443-8189-476e-951c-dee698bed6c0\") " pod="kube-system/kube-proxy-m7d8q"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.019139    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83a3d8d7-5219-4b12-ac01-e1218ae4b90b-tmp\") pod \"storage-provisioner\" (UID: \"83a3d8d7-5219-4b12-ac01-e1218ae4b90b\") " pod="kube-system/storage-provisioner"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.019203    3922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5473443-8189-476e-951c-dee698bed6c0-lib-modules\") pod \"kube-proxy-m7d8q\" (UID: \"f5473443-8189-476e-951c-dee698bed6c0\") " pod="kube-system/kube-proxy-m7d8q"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: E0914 00:50:59.214231    3922 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-271886\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-271886"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.271002    3922 scope.go:117] "RemoveContainer" containerID="1e1cc2ab8e33127dcbebbd3a689b53b6a69714d9a1f5ae102b01bc1c5c132fc4"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.271520    3922 scope.go:117] "RemoveContainer" containerID="e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.271656    3922 scope.go:117] "RemoveContainer" containerID="48ec12fd10eab0fd7f9e669fd1d98de8c20760e6d49e145b95ea5eee84f49c61"
	Sep 14 00:50:59 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:50:59.271831    3922 scope.go:117] "RemoveContainer" containerID="52b6a82d81125175ab1787ff5176a4a23ca5eaea6256dcd1a17f385fe6e4454d"
	Sep 14 00:51:01 kubernetes-upgrade-271886 kubelet[3922]: I0914 00:51:01.538825    3922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [c67e49477737a4edec69b2b7fea90c0908f0ee2bc2c10f2f965e25ee14fe7a69] <==
	I0914 00:50:59.653147       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 00:50:59.726848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 00:50:59.726963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 00:50:59.819911       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 00:50:59.820673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-271886_2124db86-1d88-4a05-ba79-f0d7340a9780!
	I0914 00:50:59.820504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22d837cc-de57-4d4f-a537-3b35dca28f17", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-271886_2124db86-1d88-4a05-ba79-f0d7340a9780 became leader
	I0914 00:50:59.921247       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-271886_2124db86-1d88-4a05-ba79-f0d7340a9780!
	
	
	==> storage-provisioner [e8ce763a6b4debf78b86a7098e4c056bad9a4007fae0b828b0e0a17948a4ae3e] <==
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:51:01.959752   61563 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19640-5422/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-271886 -n kubernetes-upgrade-271886
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-271886 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-271886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-271886
--- FAIL: TestKubernetesUpgrade (365.74s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (241.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-609507 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0914 00:49:31.535100   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-609507 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (3m56.125571845s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-609507] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-609507" primary control-plane node in "pause-609507" cluster
	* Updating the running kvm2 "pause-609507" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-609507" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:49:19.126053   57689 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:49:19.126185   57689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:49:19.126196   57689 out.go:358] Setting ErrFile to fd 2...
	I0914 00:49:19.126204   57689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:49:19.126376   57689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:49:19.126930   57689 out.go:352] Setting JSON to false
	I0914 00:49:19.127929   57689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5505,"bootTime":1726269454,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:49:19.128031   57689 start.go:139] virtualization: kvm guest
	I0914 00:49:19.130320   57689 out.go:177] * [pause-609507] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:49:19.131624   57689 notify.go:220] Checking for updates...
	I0914 00:49:19.131650   57689 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:49:19.133036   57689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:49:19.134420   57689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:49:19.135905   57689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:49:19.137292   57689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:49:19.138756   57689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:49:19.140548   57689 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:49:19.141162   57689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:49:19.141232   57689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:49:19.156811   57689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0914 00:49:19.157305   57689 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:49:19.157848   57689 main.go:141] libmachine: Using API Version  1
	I0914 00:49:19.157872   57689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:49:19.158321   57689 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:49:19.158499   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:19.158732   57689 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:49:19.159030   57689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:49:19.159064   57689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:49:19.174336   57689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0914 00:49:19.174939   57689 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:49:19.175435   57689 main.go:141] libmachine: Using API Version  1
	I0914 00:49:19.175454   57689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:49:19.175841   57689 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:49:19.176042   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:19.214767   57689 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:49:19.215856   57689 start.go:297] selected driver: kvm2
	I0914 00:49:19.215877   57689 start.go:901] validating driver "kvm2" against &{Name:pause-609507 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-609507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:49:19.216018   57689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:49:19.216413   57689 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:49:19.216483   57689 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:49:19.231839   57689 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:49:19.232548   57689 cni.go:84] Creating CNI manager for ""
	I0914 00:49:19.232597   57689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:49:19.232664   57689 start.go:340] cluster config:
	{Name:pause-609507 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-609507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:49:19.232789   57689 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:49:19.234741   57689 out.go:177] * Starting "pause-609507" primary control-plane node in "pause-609507" cluster
	I0914 00:49:19.235975   57689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:49:19.236010   57689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:49:19.236019   57689 cache.go:56] Caching tarball of preloaded images
	I0914 00:49:19.236119   57689 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:49:19.236133   57689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:49:19.236253   57689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/config.json ...
	I0914 00:49:19.236440   57689 start.go:360] acquireMachinesLock for pause-609507: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:49:19.236487   57689 start.go:364] duration metric: took 27.078µs to acquireMachinesLock for "pause-609507"
	I0914 00:49:19.236505   57689 start.go:96] Skipping create...Using existing machine configuration
	I0914 00:49:19.236526   57689 fix.go:54] fixHost starting: 
	I0914 00:49:19.236795   57689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:49:19.236833   57689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:49:19.251264   57689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38749
	I0914 00:49:19.251855   57689 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:49:19.252404   57689 main.go:141] libmachine: Using API Version  1
	I0914 00:49:19.252427   57689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:49:19.252759   57689 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:49:19.252943   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:19.253068   57689 main.go:141] libmachine: (pause-609507) Calling .GetState
	I0914 00:49:19.254893   57689 fix.go:112] recreateIfNeeded on pause-609507: state=Running err=<nil>
	W0914 00:49:19.254916   57689 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 00:49:19.257106   57689 out.go:177] * Updating the running kvm2 "pause-609507" VM ...
	I0914 00:49:19.258667   57689 machine.go:93] provisionDockerMachine start ...
	I0914 00:49:19.258687   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:19.258958   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:19.261794   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.262194   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.262219   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.262436   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:19.262629   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.262783   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.262899   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:19.263007   57689 main.go:141] libmachine: Using SSH client type: native
	I0914 00:49:19.263202   57689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0914 00:49:19.263214   57689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:49:19.379987   57689 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-609507
	
	I0914 00:49:19.380022   57689 main.go:141] libmachine: (pause-609507) Calling .GetMachineName
	I0914 00:49:19.380238   57689 buildroot.go:166] provisioning hostname "pause-609507"
	I0914 00:49:19.380276   57689 main.go:141] libmachine: (pause-609507) Calling .GetMachineName
	I0914 00:49:19.380465   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:19.383407   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.383776   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.383822   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.383950   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:19.384078   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.384225   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.384394   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:19.384578   57689 main.go:141] libmachine: Using SSH client type: native
	I0914 00:49:19.384744   57689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0914 00:49:19.384755   57689 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-609507 && echo "pause-609507" | sudo tee /etc/hostname
	I0914 00:49:19.509368   57689 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-609507
	
	I0914 00:49:19.509452   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:19.512259   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.512566   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.512595   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.512767   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:19.512941   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.513129   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.513305   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:19.513452   57689 main.go:141] libmachine: Using SSH client type: native
	I0914 00:49:19.513609   57689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0914 00:49:19.513625   57689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-609507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-609507/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-609507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:49:19.624999   57689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:49:19.625029   57689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:49:19.625077   57689 buildroot.go:174] setting up certificates
	I0914 00:49:19.625102   57689 provision.go:84] configureAuth start
	I0914 00:49:19.625120   57689 main.go:141] libmachine: (pause-609507) Calling .GetMachineName
	I0914 00:49:19.625386   57689 main.go:141] libmachine: (pause-609507) Calling .GetIP
	I0914 00:49:19.628245   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.628717   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.628740   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.628874   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:19.631018   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.631367   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.631387   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.631629   57689 provision.go:143] copyHostCerts
	I0914 00:49:19.631685   57689 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:49:19.631698   57689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:49:19.631758   57689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:49:19.631933   57689 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:49:19.631945   57689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:49:19.631987   57689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:49:19.632095   57689 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:49:19.632105   57689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:49:19.632140   57689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:49:19.632222   57689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.pause-609507 san=[127.0.0.1 192.168.39.112 localhost minikube pause-609507]
	I0914 00:49:19.725207   57689 provision.go:177] copyRemoteCerts
	I0914 00:49:19.725270   57689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:49:19.725292   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:19.727893   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.728273   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.728299   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.728531   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:19.728715   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.728893   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:19.729048   57689 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/pause-609507/id_rsa Username:docker}
	I0914 00:49:19.816080   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:49:19.844348   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:49:19.872659   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:49:19.900250   57689 provision.go:87] duration metric: took 275.110241ms to configureAuth
	I0914 00:49:19.900283   57689 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:49:19.900546   57689 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:49:19.900651   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:19.904106   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.904608   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:19.904637   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:19.904859   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:19.905046   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.905235   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:19.905395   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:19.905546   57689 main.go:141] libmachine: Using SSH client type: native
	I0914 00:49:19.905770   57689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0914 00:49:19.905788   57689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:49:25.427546   57689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:49:25.427577   57689 machine.go:96] duration metric: took 6.168896254s to provisionDockerMachine
	I0914 00:49:25.427591   57689 start.go:293] postStartSetup for "pause-609507" (driver="kvm2")
	I0914 00:49:25.427602   57689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:49:25.427624   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:25.427949   57689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:49:25.427981   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:25.431150   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.431699   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:25.431726   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.431942   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:25.432148   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:25.432381   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:25.432547   57689 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/pause-609507/id_rsa Username:docker}
	I0914 00:49:25.519226   57689 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:49:25.524321   57689 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:49:25.524348   57689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:49:25.524426   57689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:49:25.524525   57689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:49:25.524649   57689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:49:25.537097   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:49:25.562422   57689 start.go:296] duration metric: took 134.816949ms for postStartSetup
	I0914 00:49:25.562467   57689 fix.go:56] duration metric: took 6.32594771s for fixHost
	I0914 00:49:25.562492   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:25.565105   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.565585   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:25.565612   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.565776   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:25.566012   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:25.566170   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:25.566348   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:25.566573   57689 main.go:141] libmachine: Using SSH client type: native
	I0914 00:49:25.566799   57689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0914 00:49:25.566817   57689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:49:25.680228   57689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726274965.667945591
	
	I0914 00:49:25.680263   57689 fix.go:216] guest clock: 1726274965.667945591
	I0914 00:49:25.680272   57689 fix.go:229] Guest: 2024-09-14 00:49:25.667945591 +0000 UTC Remote: 2024-09-14 00:49:25.56247265 +0000 UTC m=+6.479451401 (delta=105.472941ms)
	I0914 00:49:25.680291   57689 fix.go:200] guest clock delta is within tolerance: 105.472941ms
	I0914 00:49:25.680298   57689 start.go:83] releasing machines lock for "pause-609507", held for 6.443800337s
	I0914 00:49:25.680321   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:25.680585   57689 main.go:141] libmachine: (pause-609507) Calling .GetIP
	I0914 00:49:25.683722   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.684065   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:25.684101   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.684245   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:25.684826   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:25.685012   57689 main.go:141] libmachine: (pause-609507) Calling .DriverName
	I0914 00:49:25.685103   57689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:49:25.685150   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:25.685199   57689 ssh_runner.go:195] Run: cat /version.json
	I0914 00:49:25.685219   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHHostname
	I0914 00:49:25.687946   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.688203   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.688300   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:25.688322   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.688492   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:25.688542   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:25.688582   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:25.688659   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:25.688767   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHPort
	I0914 00:49:25.688782   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:25.688972   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHKeyPath
	I0914 00:49:25.688969   57689 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/pause-609507/id_rsa Username:docker}
	I0914 00:49:25.689137   57689 main.go:141] libmachine: (pause-609507) Calling .GetSSHUsername
	I0914 00:49:25.689281   57689 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/pause-609507/id_rsa Username:docker}
	I0914 00:49:25.801861   57689 ssh_runner.go:195] Run: systemctl --version
	I0914 00:49:25.808367   57689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:49:25.967044   57689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:49:25.973787   57689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:49:25.973844   57689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:49:25.983488   57689 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 00:49:25.983514   57689 start.go:495] detecting cgroup driver to use...
	I0914 00:49:25.983576   57689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:49:26.001607   57689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:49:26.016983   57689 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:49:26.017038   57689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:49:26.031307   57689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:49:26.045632   57689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:49:26.174502   57689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:49:26.317413   57689 docker.go:233] disabling docker service ...
	I0914 00:49:26.317482   57689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:49:26.335032   57689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:49:26.349992   57689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:49:26.500077   57689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:49:26.630073   57689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:49:26.643812   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:49:26.666344   57689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:49:26.666443   57689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.677138   57689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:49:26.677215   57689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.689059   57689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.699757   57689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.710924   57689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:49:26.721672   57689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.731720   57689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.745625   57689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:49:26.757058   57689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:49:26.766640   57689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:49:26.777121   57689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:49:26.945977   57689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:49:27.384397   57689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:49:27.384492   57689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:49:27.389746   57689 start.go:563] Will wait 60s for crictl version
	I0914 00:49:27.389813   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:49:27.393617   57689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:49:27.439527   57689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:49:27.439607   57689 ssh_runner.go:195] Run: crio --version
	I0914 00:49:27.473401   57689 ssh_runner.go:195] Run: crio --version
	I0914 00:49:27.505116   57689 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 00:49:27.506347   57689 main.go:141] libmachine: (pause-609507) Calling .GetIP
	I0914 00:49:27.508874   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:27.509301   57689 main.go:141] libmachine: (pause-609507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:04:26", ip: ""} in network mk-pause-609507: {Iface:virbr3 ExpiryTime:2024-09-14 01:48:05 +0000 UTC Type:0 Mac:52:54:00:32:04:26 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:pause-609507 Clientid:01:52:54:00:32:04:26}
	I0914 00:49:27.509332   57689 main.go:141] libmachine: (pause-609507) DBG | domain pause-609507 has defined IP address 192.168.39.112 and MAC address 52:54:00:32:04:26 in network mk-pause-609507
	I0914 00:49:27.509558   57689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 00:49:27.514005   57689 kubeadm.go:883] updating cluster {Name:pause-609507 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-609507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:49:27.514166   57689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:49:27.514220   57689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:49:27.557811   57689 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:49:27.557833   57689 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:49:27.557876   57689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:49:27.599358   57689 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:49:27.599378   57689 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:49:27.599389   57689 kubeadm.go:934] updating node { 192.168.39.112 8443 v1.31.1 crio true true} ...
	I0914 00:49:27.599524   57689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-609507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-609507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:49:27.599615   57689 ssh_runner.go:195] Run: crio config
	I0914 00:49:27.645994   57689 cni.go:84] Creating CNI manager for ""
	I0914 00:49:27.646017   57689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:49:27.646027   57689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:49:27.646051   57689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.112 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-609507 NodeName:pause-609507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:49:27.646214   57689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-609507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:49:27.646295   57689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:49:27.657255   57689 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:49:27.657330   57689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:49:27.668534   57689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0914 00:49:27.685441   57689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:49:27.701188   57689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0914 00:49:27.717694   57689 ssh_runner.go:195] Run: grep 192.168.39.112	control-plane.minikube.internal$ /etc/hosts
	I0914 00:49:27.721655   57689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:49:27.852525   57689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:49:27.867897   57689 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507 for IP: 192.168.39.112
	I0914 00:49:27.867923   57689 certs.go:194] generating shared ca certs ...
	I0914 00:49:27.867939   57689 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:49:27.868119   57689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:49:27.868178   57689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:49:27.868191   57689 certs.go:256] generating profile certs ...
	I0914 00:49:27.868317   57689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/client.key
	I0914 00:49:27.868391   57689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/apiserver.key.276ae27d
	I0914 00:49:27.868443   57689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/proxy-client.key
	I0914 00:49:27.868626   57689 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:49:27.868664   57689 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:49:27.868678   57689 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:49:27.868713   57689 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:49:27.868741   57689 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:49:27.868774   57689 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:49:27.868827   57689 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:49:27.869424   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:49:27.893310   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:49:27.920940   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:49:27.945720   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:49:28.013902   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:49:28.042937   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 00:49:28.111954   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:49:28.156782   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/pause-609507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 00:49:28.197448   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:49:28.262090   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:49:28.361020   57689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:49:28.461987   57689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:49:28.503089   57689 ssh_runner.go:195] Run: openssl version
	I0914 00:49:28.516664   57689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:49:28.550631   57689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:49:28.566826   57689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:49:28.566988   57689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:49:28.576112   57689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:49:28.590857   57689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:49:28.606062   57689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:49:28.612913   57689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:49:28.612982   57689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:49:28.624898   57689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:49:28.666416   57689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:49:28.708370   57689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:49:28.718838   57689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:49:28.718917   57689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:49:28.732329   57689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:49:28.760228   57689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:49:28.768404   57689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 00:49:28.781163   57689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 00:49:28.793339   57689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 00:49:28.805262   57689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 00:49:28.819420   57689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 00:49:28.827554   57689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 00:49:28.842923   57689 kubeadm.go:392] StartCluster: {Name:pause-609507 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-609507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:49:28.843090   57689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:49:28.843164   57689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:49:28.923025   57689 cri.go:89] found id: "6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	I0914 00:49:28.923054   57689 cri.go:89] found id: "174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:49:28.923060   57689 cri.go:89] found id: "29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1"
	I0914 00:49:28.923065   57689 cri.go:89] found id: "85951213dc27de089b16eb4851aae969b3fc8a84b0e3d7a13f9236d794516eb1"
	I0914 00:49:28.923069   57689 cri.go:89] found id: "8ff2c225cff2e66a2190ad661f41675828944c24e4938b9a5a173341e89bc2f8"
	I0914 00:49:28.923074   57689 cri.go:89] found id: "eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7"
	I0914 00:49:28.923078   57689 cri.go:89] found id: "026c5cef3bbe9cd2552a97eb893866d3e68613e706770c304668dd406846bbed"
	I0914 00:49:28.923081   57689 cri.go:89] found id: "82abc4934eb03aaeb1bf91d5a7ec348c1e333561377cc838e7074904032735f4"
	I0914 00:49:28.923086   57689 cri.go:89] found id: "2e212949739e3a44243932b590064fb115c98a6f3bc69122ed16ae514a0e6d82"
	I0914 00:49:28.923095   57689 cri.go:89] found id: "429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179"
	I0914 00:49:28.923099   57689 cri.go:89] found id: ""
	I0914 00:49:28.923152   57689 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
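For readers following the stderr capture above: before reusing the existing control plane, minikube probes each certificate with openssl x509 -noout -in <cert> -checkend 86400, i.e. it asks whether the certificate will still be valid 24 hours from now and only regenerates it if not. The following is a minimal stand-alone Go sketch of that same freshness check, not minikube's actual implementation; the example path in the comment is simply one of the files named in the log.

	// certfresh.go - a minimal sketch (not minikube's code) of the 24-hour
	// certificate freshness check the log performs with
	// `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) < 2 {
			// e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
			fmt.Fprintln(os.Stderr, "usage: certfresh <path-to-pem-cert>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of -checkend 86400: fail if the cert is no longer valid
		// 24 hours from now (or has already expired).
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}

Run against one of the profile certificates, a zero exit status corresponds to the "skipping valid signed profile cert regeneration" lines in the capture above.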
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-609507 -n pause-609507
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-609507 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-609507 logs -n 25: (1.502810776s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449 sudo cat                | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449 sudo cat                | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449 sudo cat                | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	| start   | -p old-k8s-version-431084                            | old-k8s-version-431084    | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-670449 pgrep -a                           | flannel-670449            | jenkins | v1.34.0 | 14 Sep 24 00:53 UTC |                     |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:52:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:52:35.724587   66801 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:52:35.724870   66801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:35.724882   66801 out.go:358] Setting ErrFile to fd 2...
	I0914 00:52:35.724887   66801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:35.725072   66801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:52:35.725683   66801 out.go:352] Setting JSON to false
	I0914 00:52:35.726845   66801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5702,"bootTime":1726269454,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:52:35.726957   66801 start.go:139] virtualization: kvm guest
	I0914 00:52:35.729922   66801 out.go:177] * [old-k8s-version-431084] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:52:35.731826   66801 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:52:35.731857   66801 notify.go:220] Checking for updates...
	I0914 00:52:35.735497   66801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:52:35.737400   66801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:52:35.738944   66801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:35.740627   66801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:52:35.742223   66801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:52:35.744593   66801 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.744763   66801 config.go:182] Loaded profile config "flannel-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.744950   66801 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.745082   66801 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:52:35.792655   66801 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 00:52:35.794325   66801 start.go:297] selected driver: kvm2
	I0914 00:52:35.794345   66801 start.go:901] validating driver "kvm2" against <nil>
	I0914 00:52:35.794357   66801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:52:35.795353   66801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:52:35.795460   66801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:52:35.812779   66801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:52:35.812843   66801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:52:35.813119   66801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:52:35.813151   66801 cni.go:84] Creating CNI manager for ""
	I0914 00:52:35.813197   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:52:35.813206   66801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 00:52:35.813298   66801 start.go:340] cluster config:
	{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:52:35.813422   66801 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:52:35.815527   66801 out.go:177] * Starting "old-k8s-version-431084" primary control-plane node in "old-k8s-version-431084" cluster
	I0914 00:52:36.030521   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:36.031056   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:36.031083   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:36.031009   65551 retry.go:31] will retry after 2.413971511s: waiting for machine to come up
	I0914 00:52:38.446532   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:38.447295   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:38.447328   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:38.447229   65551 retry.go:31] will retry after 3.186000225s: waiting for machine to come up
	I0914 00:52:36.093385   63660 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002312016s
	I0914 00:52:36.093493   63660 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:52:35.816967   66801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:52:35.817022   66801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 00:52:35.817033   66801 cache.go:56] Caching tarball of preloaded images
	I0914 00:52:35.817165   66801 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:52:35.817181   66801 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 00:52:35.817348   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 00:52:35.817378   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json: {Name:mk66cd4353dae42258dd8e2fe6f383f65dc09589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:35.817576   66801 start.go:360] acquireMachinesLock for old-k8s-version-431084: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:52:41.093265   63660 kubeadm.go:310] [api-check] The API server is healthy after 5.002550747s
	I0914 00:52:41.106932   63660 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:52:41.134962   63660 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:52:41.175033   63660 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:52:41.175314   63660 kubeadm.go:310] [mark-control-plane] Marking the node flannel-670449 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:52:41.192400   63660 kubeadm.go:310] [bootstrap-token] Using token: m21b40.gixyoiwl4zzeo6il
	I0914 00:52:41.194333   63660 out.go:235]   - Configuring RBAC rules ...
	I0914 00:52:41.194533   63660 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:52:41.199774   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:52:41.215531   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:52:41.223597   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:52:41.228568   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:52:41.234464   63660 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:52:41.500134   63660 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:52:41.925932   63660 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:52:42.502870   63660 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:52:42.502897   63660 kubeadm.go:310] 
	I0914 00:52:42.502977   63660 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:52:42.502989   63660 kubeadm.go:310] 
	I0914 00:52:42.503083   63660 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:52:42.503091   63660 kubeadm.go:310] 
	I0914 00:52:42.503118   63660 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:52:42.503189   63660 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:52:42.503278   63660 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:52:42.503287   63660 kubeadm.go:310] 
	I0914 00:52:42.503366   63660 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:52:42.503376   63660 kubeadm.go:310] 
	I0914 00:52:42.503473   63660 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:52:42.503496   63660 kubeadm.go:310] 
	I0914 00:52:42.503573   63660 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:52:42.503674   63660 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:52:42.503770   63660 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:52:42.503779   63660 kubeadm.go:310] 
	I0914 00:52:42.503906   63660 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:52:42.504038   63660 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:52:42.504048   63660 kubeadm.go:310] 
	I0914 00:52:42.504169   63660 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m21b40.gixyoiwl4zzeo6il \
	I0914 00:52:42.504305   63660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 00:52:42.504336   63660 kubeadm.go:310] 	--control-plane 
	I0914 00:52:42.504351   63660 kubeadm.go:310] 
	I0914 00:52:42.504464   63660 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:52:42.504475   63660 kubeadm.go:310] 
	I0914 00:52:42.504582   63660 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m21b40.gixyoiwl4zzeo6il \
	I0914 00:52:42.504708   63660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 00:52:42.505158   63660 kubeadm.go:310] W0914 00:52:31.439490     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:42.505425   63660 kubeadm.go:310] W0914 00:52:31.440579     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:42.505558   63660 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:52:42.505583   63660 cni.go:84] Creating CNI manager for "flannel"
	I0914 00:52:42.507068   63660 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0914 00:52:42.581213   57689 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (52.945245829s)
	I0914 00:52:42.584252   57689 logs.go:123] Gathering logs for etcd [174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688] ...
	I0914 00:52:42.584277   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:52:42.628197   57689 logs.go:123] Gathering logs for kube-scheduler [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac] ...
	I0914 00:52:42.628232   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac"
	I0914 00:52:41.634470   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:41.634910   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:41.634933   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:41.634875   65551 retry.go:31] will retry after 4.116962653s: waiting for machine to come up
	I0914 00:52:42.508050   63660 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 00:52:42.513561   63660 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 00:52:42.513577   63660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0914 00:52:42.534460   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 00:52:42.951907   63660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:52:42.952030   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:42.952068   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-670449 minikube.k8s.io/updated_at=2024_09_14T00_52_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=flannel-670449 minikube.k8s.io/primary=true
	I0914 00:52:43.138454   63660 ops.go:34] apiserver oom_adj: -16
	I0914 00:52:43.138611   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:43.639447   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:44.138955   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:44.638731   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:45.138696   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:45.639231   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:46.138783   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:46.639375   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:46.800566   63660 kubeadm.go:1113] duration metric: took 3.848587036s to wait for elevateKubeSystemPrivileges
	I0914 00:52:46.800608   63660 kubeadm.go:394] duration metric: took 15.544937331s to StartCluster
	I0914 00:52:46.800632   63660 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:46.800721   63660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:52:46.802200   63660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:46.802504   63660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:52:46.802534   63660 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:52:46.802600   63660 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:52:46.802752   63660 addons.go:69] Setting default-storageclass=true in profile "flannel-670449"
	I0914 00:52:46.802780   63660 addons.go:69] Setting storage-provisioner=true in profile "flannel-670449"
	I0914 00:52:46.802790   63660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-670449"
	I0914 00:52:46.802836   63660 addons.go:234] Setting addon storage-provisioner=true in "flannel-670449"
	I0914 00:52:46.802871   63660 host.go:66] Checking if "flannel-670449" exists ...
	I0914 00:52:46.802784   63660 config.go:182] Loaded profile config "flannel-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:46.803394   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.803445   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.803451   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.803493   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.804222   63660 out.go:177] * Verifying Kubernetes components...
	I0914 00:52:46.806107   63660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:52:46.819765   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0914 00:52:46.819875   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0914 00:52:46.820333   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.820405   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.820896   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.820900   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.820914   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.820918   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.821281   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.821318   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.821502   63660 main.go:141] libmachine: (flannel-670449) Calling .GetState
	I0914 00:52:46.821839   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.821880   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.825594   63660 addons.go:234] Setting addon default-storageclass=true in "flannel-670449"
	I0914 00:52:46.825643   63660 host.go:66] Checking if "flannel-670449" exists ...
	I0914 00:52:46.826029   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.826082   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.837556   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0914 00:52:46.838070   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.838695   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.838726   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.839045   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.839228   63660 main.go:141] libmachine: (flannel-670449) Calling .GetState
	I0914 00:52:46.841167   63660 main.go:141] libmachine: (flannel-670449) Calling .DriverName
	I0914 00:52:46.841408   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33385
	I0914 00:52:46.841937   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.842477   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.842498   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.842897   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.843189   63660 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:52:46.843415   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.843451   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.844758   63660 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:52:46.844780   63660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:52:46.844809   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHHostname
	I0914 00:52:46.847890   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.848409   63660 main.go:141] libmachine: (flannel-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:66:54", ip: ""} in network mk-flannel-670449: {Iface:virbr4 ExpiryTime:2024-09-14 01:52:15 +0000 UTC Type:0 Mac:52:54:00:15:66:54 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:flannel-670449 Clientid:01:52:54:00:15:66:54}
	I0914 00:52:46.848436   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined IP address 192.168.72.151 and MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.848612   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHPort
	I0914 00:52:46.848796   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHKeyPath
	I0914 00:52:46.848955   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHUsername
	I0914 00:52:46.849128   63660 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/flannel-670449/id_rsa Username:docker}
	I0914 00:52:46.859731   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0914 00:52:46.860330   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.860803   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.860825   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.861151   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.861346   63660 main.go:141] libmachine: (flannel-670449) Calling .GetState
	I0914 00:52:46.863059   63660 main.go:141] libmachine: (flannel-670449) Calling .DriverName
	I0914 00:52:46.863289   63660 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:52:46.863303   63660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:52:46.863317   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHHostname
	I0914 00:52:46.866859   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.867369   63660 main.go:141] libmachine: (flannel-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:66:54", ip: ""} in network mk-flannel-670449: {Iface:virbr4 ExpiryTime:2024-09-14 01:52:15 +0000 UTC Type:0 Mac:52:54:00:15:66:54 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:flannel-670449 Clientid:01:52:54:00:15:66:54}
	I0914 00:52:46.867390   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined IP address 192.168.72.151 and MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.867551   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHPort
	I0914 00:52:46.867717   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHKeyPath
	I0914 00:52:46.867871   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHUsername
	I0914 00:52:46.868000   63660 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/flannel-670449/id_rsa Username:docker}
	I0914 00:52:47.129773   63660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:52:47.129816   63660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:52:47.208206   63660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:52:47.314622   63660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:52:47.619910   63660 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 00:52:47.620848   63660 node_ready.go:35] waiting up to 15m0s for node "flannel-670449" to be "Ready" ...
	I0914 00:52:47.901083   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901110   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901150   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901171   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901548   63660 main.go:141] libmachine: (flannel-670449) DBG | Closing plugin on server side
	I0914 00:52:47.901575   63660 main.go:141] libmachine: (flannel-670449) DBG | Closing plugin on server side
	I0914 00:52:47.901598   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.901607   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.901615   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901621   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901785   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.901804   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.901815   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901823   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901980   63660 main.go:141] libmachine: (flannel-670449) DBG | Closing plugin on server side
	I0914 00:52:47.902028   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.902063   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.902192   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.902204   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.912084   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.912108   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.912390   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.912409   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.914010   63660 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 00:52:45.168586   57689 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0914 00:52:47.174812   57689 api_server.go:279] https://192.168.39.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 00:52:47.174848   57689 api_server.go:103] status: https://192.168.39.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
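	The same verbose health report can be fetched by hand; a sketch using the apiserver address from this log (TLS verification skipped, and note that individual check endpoints, unlike the top-level path, may require credentials):
	    # verbose healthz, matching the output captured above
	    curl -ks 'https://192.168.39.112:8443/healthz?verbose'
	    # the check failing above can also be queried on its own
	    curl -ks 'https://192.168.39.112:8443/healthz/etcd'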
	I0914 00:52:47.174881   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:52:47.174949   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:52:47.222404   57689 cri.go:89] found id: "ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9"
	I0914 00:52:47.222430   57689 cri.go:89] found id: "429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179"
	I0914 00:52:47.222436   57689 cri.go:89] found id: ""
	I0914 00:52:47.222445   57689 logs.go:276] 2 containers: [ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9 429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179]
	I0914 00:52:47.222512   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.228116   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.232504   57689 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:52:47.232564   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:52:47.281624   57689 cri.go:89] found id: "8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73"
	I0914 00:52:47.281651   57689 cri.go:89] found id: "174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:52:47.281658   57689 cri.go:89] found id: ""
	I0914 00:52:47.281668   57689 logs.go:276] 2 containers: [8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73 174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688]
	I0914 00:52:47.281727   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.285853   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.290892   57689 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:52:47.290970   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:52:47.331869   57689 cri.go:89] found id: "6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	I0914 00:52:47.331895   57689 cri.go:89] found id: ""
	I0914 00:52:47.331905   57689 logs.go:276] 1 containers: [6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7]
	I0914 00:52:47.331968   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.335953   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:52:47.336028   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:52:47.377231   57689 cri.go:89] found id: "9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac"
	I0914 00:52:47.377256   57689 cri.go:89] found id: "29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1"
	I0914 00:52:47.377263   57689 cri.go:89] found id: ""
	I0914 00:52:47.377272   57689 logs.go:276] 2 containers: [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac 29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1]
	I0914 00:52:47.377332   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.381349   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.385995   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:52:47.386065   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:52:47.428313   57689 cri.go:89] found id: "eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7"
	I0914 00:52:47.428341   57689 cri.go:89] found id: ""
	I0914 00:52:47.428350   57689 logs.go:276] 1 containers: [eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7]
	I0914 00:52:47.428410   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.432320   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:52:47.432393   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:52:47.466850   57689 cri.go:89] found id: "0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	I0914 00:52:47.466876   57689 cri.go:89] found id: ""
	I0914 00:52:47.466886   57689 logs.go:276] 1 containers: [0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969]
	I0914 00:52:47.466948   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.470993   57689 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:52:47.471075   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:52:47.505826   57689 cri.go:89] found id: ""
	I0914 00:52:47.505858   57689 logs.go:276] 0 containers: []
	W0914 00:52:47.505869   57689 logs.go:278] No container was found matching "kindnet"
	I0914 00:52:47.505887   57689 logs.go:123] Gathering logs for etcd [174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688] ...
	I0914 00:52:47.505901   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:52:47.561783   57689 logs.go:123] Gathering logs for coredns [6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7] ...
	I0914 00:52:47.561837   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	I0914 00:52:47.607123   57689 logs.go:123] Gathering logs for container status ...
	I0914 00:52:47.607162   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:52:47.654787   57689 logs.go:123] Gathering logs for kube-apiserver [ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9] ...
	I0914 00:52:47.654834   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9"
	I0914 00:52:47.758213   57689 logs.go:123] Gathering logs for kube-apiserver [429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179] ...
	I0914 00:52:47.758259   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179"
	I0914 00:52:47.809207   57689 logs.go:123] Gathering logs for kubelet ...
	I0914 00:52:47.809253   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 00:52:47.922544   57689 logs.go:123] Gathering logs for dmesg ...
	I0914 00:52:47.922580   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:52:47.938532   57689 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:52:47.938565   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:52:48.058868   57689 logs.go:123] Gathering logs for etcd [8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73] ...
	I0914 00:52:48.058911   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73"
	I0914 00:52:48.105331   57689 logs.go:123] Gathering logs for kube-proxy [eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7] ...
	I0914 00:52:48.105364   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7"
	I0914 00:52:48.142072   57689 logs.go:123] Gathering logs for kube-scheduler [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac] ...
	I0914 00:52:48.142098   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac"
	I0914 00:52:48.176207   57689 logs.go:123] Gathering logs for kube-scheduler [29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1] ...
	I0914 00:52:48.176235   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1"
	I0914 00:52:48.230561   57689 logs.go:123] Gathering logs for kube-controller-manager [0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969] ...
	I0914 00:52:48.230598   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	I0914 00:52:48.269943   57689 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:52:48.269982   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:52:45.753768   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:45.754305   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:45.754334   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:45.754233   65551 retry.go:31] will retry after 3.696197004s: waiting for machine to come up
	I0914 00:52:49.453223   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.453699   65519 main.go:141] libmachine: (bridge-670449) Found IP for machine: 192.168.50.31
	I0914 00:52:49.453727   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has current primary IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.453733   65519 main.go:141] libmachine: (bridge-670449) Reserving static IP address...
	I0914 00:52:49.454144   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find host DHCP lease matching {name: "bridge-670449", mac: "52:54:00:f0:d3:6e", ip: "192.168.50.31"} in network mk-bridge-670449
	I0914 00:52:49.541442   65519 main.go:141] libmachine: (bridge-670449) Reserved static IP address: 192.168.50.31
	I0914 00:52:49.541471   65519 main.go:141] libmachine: (bridge-670449) Waiting for SSH to be available...
	I0914 00:52:49.541480   65519 main.go:141] libmachine: (bridge-670449) DBG | Getting to WaitForSSH function...
	I0914 00:52:49.544901   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.545335   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.545357   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.545532   65519 main.go:141] libmachine: (bridge-670449) DBG | Using SSH client type: external
	I0914 00:52:49.545564   65519 main.go:141] libmachine: (bridge-670449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa (-rw-------)
	I0914 00:52:49.545590   65519 main.go:141] libmachine: (bridge-670449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 00:52:49.545599   65519 main.go:141] libmachine: (bridge-670449) DBG | About to run SSH command:
	I0914 00:52:49.545609   65519 main.go:141] libmachine: (bridge-670449) DBG | exit 0
	I0914 00:52:49.671919   65519 main.go:141] libmachine: (bridge-670449) DBG | SSH cmd err, output: <nil>: 
	I0914 00:52:49.672185   65519 main.go:141] libmachine: (bridge-670449) KVM machine creation complete!
	I0914 00:52:49.672598   65519 main.go:141] libmachine: (bridge-670449) Calling .GetConfigRaw
	I0914 00:52:49.673192   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:49.673397   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:49.673561   65519 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 00:52:49.673578   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:52:49.675295   65519 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 00:52:49.675312   65519 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 00:52:49.675321   65519 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 00:52:49.675330   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:49.677975   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.678364   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.678407   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.678557   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:49.678721   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.678862   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.679017   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:49.679154   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:49.679404   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:49.679423   65519 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 00:52:49.783363   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:52:49.783388   65519 main.go:141] libmachine: Detecting the provisioner...
	I0914 00:52:49.783395   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:49.787254   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.787742   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.787803   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.787987   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:49.788200   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.788466   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.788669   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:49.788874   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:49.789051   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:49.789067   65519 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 00:52:49.892912   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 00:52:49.893014   65519 main.go:141] libmachine: found compatible host: buildroot
	I0914 00:52:49.893036   65519 main.go:141] libmachine: Provisioning with buildroot...
	I0914 00:52:49.893048   65519 main.go:141] libmachine: (bridge-670449) Calling .GetMachineName
	I0914 00:52:49.893313   65519 buildroot.go:166] provisioning hostname "bridge-670449"
	I0914 00:52:49.893337   65519 main.go:141] libmachine: (bridge-670449) Calling .GetMachineName
	I0914 00:52:49.893533   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:49.895960   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.896375   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.896404   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.896565   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:49.896732   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.896878   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.896979   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:49.897089   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:49.897284   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:49.897298   65519 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-670449 && echo "bridge-670449" | sudo tee /etc/hostname
	I0914 00:52:50.014221   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-670449
	
	I0914 00:52:50.014267   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.017328   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.017692   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.017742   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.017890   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:50.018078   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.018239   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.018361   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:50.018531   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:50.018776   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:50.018802   65519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-670449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-670449/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-670449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:52:50.128628   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:52:50.128659   65519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:52:50.128686   65519 buildroot.go:174] setting up certificates
	I0914 00:52:50.128701   65519 provision.go:84] configureAuth start
	I0914 00:52:50.128710   65519 main.go:141] libmachine: (bridge-670449) Calling .GetMachineName
	I0914 00:52:50.129001   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:50.132177   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.132627   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.132656   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.132773   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.134959   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.135238   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.135263   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.135429   65519 provision.go:143] copyHostCerts
	I0914 00:52:50.135486   65519 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:52:50.135498   65519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:52:50.135575   65519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:52:50.135697   65519 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:52:50.135709   65519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:52:50.135738   65519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:52:50.135822   65519 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:52:50.135833   65519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:52:50.135873   65519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:52:50.135959   65519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.bridge-670449 san=[127.0.0.1 192.168.50.31 bridge-670449 localhost minikube]
	I0914 00:52:47.915042   63660 addons.go:510] duration metric: took 1.112446056s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0914 00:52:48.123994   63660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-670449" context rescaled to 1 replicas
	I0914 00:52:49.626128   63660 node_ready.go:53] node "flannel-670449" has status "Ready":"False"
	I0914 00:52:51.147099   57689 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0914 00:52:51.153123   57689 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0914 00:52:51.159343   57689 api_server.go:141] control plane version: v1.31.1
	I0914 00:52:51.159368   57689 api_server.go:131] duration metric: took 3m9.119642245s to wait for apiserver health ...
	I0914 00:52:51.159376   57689 cni.go:84] Creating CNI manager for ""
	I0914 00:52:51.159382   57689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:52:51.161233   57689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 00:52:51.328508   66801 start.go:364] duration metric: took 15.510868976s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 00:52:51.328574   66801 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:52:51.328690   66801 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 00:52:50.703747   65519 provision.go:177] copyRemoteCerts
	I0914 00:52:50.703845   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:52:50.703886   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.706881   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.707256   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.707283   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.707519   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:50.707732   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.707909   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:50.708051   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:50.790117   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:52:50.813667   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:52:50.837862   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:52:50.863630   65519 provision.go:87] duration metric: took 734.9175ms to configureAuth
	I0914 00:52:50.863656   65519 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:52:50.863848   65519 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:50.863928   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.867432   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.867857   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.867878   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.868086   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:50.868289   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.868419   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.868590   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:50.868786   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:50.868943   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:50.868956   65519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:52:51.086308   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:52:51.086343   65519 main.go:141] libmachine: Checking connection to Docker...
	I0914 00:52:51.086352   65519 main.go:141] libmachine: (bridge-670449) Calling .GetURL
	I0914 00:52:51.087648   65519 main.go:141] libmachine: (bridge-670449) DBG | Using libvirt version 6000000
	I0914 00:52:51.089747   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.090114   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.090138   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.090316   65519 main.go:141] libmachine: Docker is up and running!
	I0914 00:52:51.090332   65519 main.go:141] libmachine: Reticulating splines...
	I0914 00:52:51.090339   65519 client.go:171] duration metric: took 25.636237149s to LocalClient.Create
	I0914 00:52:51.090360   65519 start.go:167] duration metric: took 25.636338798s to libmachine.API.Create "bridge-670449"
	I0914 00:52:51.090373   65519 start.go:293] postStartSetup for "bridge-670449" (driver="kvm2")
	I0914 00:52:51.090386   65519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:52:51.090403   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.090652   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:52:51.090679   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.092803   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.093149   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.093170   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.093254   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.093411   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.093553   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.093680   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:51.174415   65519 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:52:51.178609   65519 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:52:51.178631   65519 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:52:51.178691   65519 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:52:51.178774   65519 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:52:51.178886   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:52:51.188254   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:52:51.217990   65519 start.go:296] duration metric: took 127.600471ms for postStartSetup
	I0914 00:52:51.218068   65519 main.go:141] libmachine: (bridge-670449) Calling .GetConfigRaw
	I0914 00:52:51.218735   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:51.221529   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.221968   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.222028   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.222236   65519 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/config.json ...
	I0914 00:52:51.222424   65519 start.go:128] duration metric: took 25.791389492s to createHost
	I0914 00:52:51.222446   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.224953   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.225313   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.225342   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.225513   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.225684   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.225845   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.225960   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.226142   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:51.226312   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:51.226322   65519 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:52:51.328261   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275171.280433236
	
	I0914 00:52:51.328293   65519 fix.go:216] guest clock: 1726275171.280433236
	I0914 00:52:51.328303   65519 fix.go:229] Guest: 2024-09-14 00:52:51.280433236 +0000 UTC Remote: 2024-09-14 00:52:51.222435144 +0000 UTC m=+25.918691991 (delta=57.998092ms)
	I0914 00:52:51.328371   65519 fix.go:200] guest clock delta is within tolerance: 57.998092ms
	I0914 00:52:51.328381   65519 start.go:83] releasing machines lock for "bridge-670449", held for 25.897467372s
	I0914 00:52:51.328433   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.328693   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:51.331858   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.332216   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.332245   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.332437   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.332988   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.333184   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.333272   65519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:52:51.333324   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.333425   65519 ssh_runner.go:195] Run: cat /version.json
	I0914 00:52:51.333450   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.336254   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.336423   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.336614   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.336639   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.336818   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.336841   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.336841   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.337064   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.337065   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.337310   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.337327   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.337450   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.337447   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:51.337621   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:51.425713   65519 ssh_runner.go:195] Run: systemctl --version
	I0914 00:52:51.459283   65519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:52:51.631590   65519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:52:51.637349   65519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:52:51.637437   65519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:52:51.654400   65519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 00:52:51.654427   65519 start.go:495] detecting cgroup driver to use...
	I0914 00:52:51.654497   65519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:52:51.675236   65519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:52:51.691902   65519 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:52:51.691977   65519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:52:51.708339   65519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:52:51.724145   65519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:52:51.880435   65519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:52:52.056731   65519 docker.go:233] disabling docker service ...
	I0914 00:52:52.056810   65519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:52:52.076254   65519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:52:52.094856   65519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:52:52.287880   65519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:52:52.455638   65519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:52:52.475364   65519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:52:52.496530   65519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:52:52.496600   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.507683   65519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:52:52.507763   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.520778   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.532702   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.544613   65519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:52:52.556678   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.566990   65519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.584466   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
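	Taken together, the sed edits above should leave the CRI-O drop-in with the values below; a sketch for verifying them (the expected values come from the sed invocations in this log, while the suggestion to grep the drop-in file is illustrative only):
	    # show the keys rewritten by the commands above
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected values, per the sed invocations:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",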
	I0914 00:52:52.594753   65519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:52:52.606063   65519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 00:52:52.606136   65519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 00:52:52.625334   65519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:52:52.638788   65519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:52:52.771946   65519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:52:52.889568   65519 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:52:52.889645   65519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:52:52.894380   65519 start.go:563] Will wait 60s for crictl version
	I0914 00:52:52.894450   65519 ssh_runner.go:195] Run: which crictl
	I0914 00:52:52.899436   65519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:52:52.953327   65519 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:52:52.953417   65519 ssh_runner.go:195] Run: crio --version
	I0914 00:52:52.991522   65519 ssh_runner.go:195] Run: crio --version
	I0914 00:52:53.025909   65519 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 00:52:51.162252   57689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 00:52:51.173126   57689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
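	A quick way to inspect the bridge CNI config written in the previous step; the path and expected size come from the log line above, the rest is an illustrative sketch:
	    sudo cat /etc/cni/net.d/1-k8s.conflist     # the generated bridge conflist
	    sudo wc -c /etc/cni/net.d/1-k8s.conflist   # should report 496 bytes, per the scp line above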
	I0914 00:52:51.193150   57689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:52:51.193239   57689 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 00:52:51.193257   57689 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 00:52:51.200614   57689 system_pods.go:59] 6 kube-system pods found
	I0914 00:52:51.200646   57689 system_pods.go:61] "coredns-7c65d6cfc9-jjdnr" [17391162-c95e-489d-825a-a869da462757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 00:52:51.200654   57689 system_pods.go:61] "etcd-pause-609507" [3a57c2e5-009f-4f67-a8a2-0eeaf0a939a8] Running
	I0914 00:52:51.200664   57689 system_pods.go:61] "kube-apiserver-pause-609507" [35a9e7ba-4d49-486b-b21c-587b2cc63010] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 00:52:51.200673   57689 system_pods.go:61] "kube-controller-manager-pause-609507" [200bcfc3-e090-4792-9c94-7f448edd86be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 00:52:51.200683   57689 system_pods.go:61] "kube-proxy-djqjf" [ca94aecb-0013-45fc-b541-7d11e5f7089e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 00:52:51.200692   57689 system_pods.go:61] "kube-scheduler-pause-609507" [64772355-1ba0-46f4-a07d-9db6aee07b73] Running
	I0914 00:52:51.200700   57689 system_pods.go:74] duration metric: took 7.525304ms to wait for pod list to return data ...
	I0914 00:52:51.200714   57689 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:52:51.204808   57689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:52:51.204849   57689 node_conditions.go:123] node cpu capacity is 2
	I0914 00:52:51.204864   57689 node_conditions.go:105] duration metric: took 4.145509ms to run NodePressure ...
	I0914 00:52:51.204885   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:52:53.027300   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:53.031622   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:53.033230   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:53.033271   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:53.033584   65519 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 00:52:53.038756   65519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:52:53.054324   65519 kubeadm.go:883] updating cluster {Name:bridge-670449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:52:53.054461   65519 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:52:53.054550   65519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:52:53.096127   65519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 00:52:53.096198   65519 ssh_runner.go:195] Run: which lz4
	I0914 00:52:53.101323   65519 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 00:52:53.106279   65519 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 00:52:53.106309   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 00:52:54.518632   65519 crio.go:462] duration metric: took 1.417350063s to copy over tarball
	I0914 00:52:54.518708   65519 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 00:52:52.132656   63660 node_ready.go:53] node "flannel-670449" has status "Ready":"False"
	I0914 00:52:54.625439   63660 node_ready.go:53] node "flannel-670449" has status "Ready":"False"
	I0914 00:52:51.330804   66801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 00:52:51.331044   66801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:51.331098   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:51.348179   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0914 00:52:51.348690   66801 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:51.349350   66801 main.go:141] libmachine: Using API Version  1
	I0914 00:52:51.349375   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:51.349795   66801 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:51.349981   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:52:51.350148   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:52:51.350313   66801 start.go:159] libmachine.API.Create for "old-k8s-version-431084" (driver="kvm2")
	I0914 00:52:51.350346   66801 client.go:168] LocalClient.Create starting
	I0914 00:52:51.350381   66801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0914 00:52:51.350426   66801 main.go:141] libmachine: Decoding PEM data...
	I0914 00:52:51.350450   66801 main.go:141] libmachine: Parsing certificate...
	I0914 00:52:51.350517   66801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0914 00:52:51.350545   66801 main.go:141] libmachine: Decoding PEM data...
	I0914 00:52:51.350565   66801 main.go:141] libmachine: Parsing certificate...
	I0914 00:52:51.350590   66801 main.go:141] libmachine: Running pre-create checks...
	I0914 00:52:51.350607   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .PreCreateCheck
	I0914 00:52:51.350931   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:52:51.351327   66801 main.go:141] libmachine: Creating machine...
	I0914 00:52:51.351341   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .Create
	I0914 00:52:51.351507   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating KVM machine...
	I0914 00:52:51.352625   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found existing default KVM network
	I0914 00:52:51.353662   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.353505   66991 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:44:cd:68} reservation:<nil>}
	I0914 00:52:51.354641   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.354562   66991 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:96:2b} reservation:<nil>}
	I0914 00:52:51.355823   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.355718   66991 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380930}
	I0914 00:52:51.355857   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | created network xml: 
	I0914 00:52:51.355869   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | <network>
	I0914 00:52:51.355875   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <name>mk-old-k8s-version-431084</name>
	I0914 00:52:51.355891   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <dns enable='no'/>
	I0914 00:52:51.355895   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   
	I0914 00:52:51.355902   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0914 00:52:51.355907   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |     <dhcp>
	I0914 00:52:51.355916   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0914 00:52:51.355920   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |     </dhcp>
	I0914 00:52:51.355925   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   </ip>
	I0914 00:52:51.355930   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   
	I0914 00:52:51.355937   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | </network>
	I0914 00:52:51.355944   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | 
	I0914 00:52:51.364017   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | trying to create private KVM network mk-old-k8s-version-431084 192.168.61.0/24...
	I0914 00:52:51.440773   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 ...
	I0914 00:52:51.440805   66801 main.go:141] libmachine: (old-k8s-version-431084) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0914 00:52:51.440818   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | private KVM network mk-old-k8s-version-431084 192.168.61.0/24 created
	I0914 00:52:51.440831   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.440744   66991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:51.440852   66801 main.go:141] libmachine: (old-k8s-version-431084) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0914 00:52:51.735078   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.734905   66991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa...
	I0914 00:52:51.899652   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.899507   66991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/old-k8s-version-431084.rawdisk...
	I0914 00:52:51.899696   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Writing magic tar header
	I0914 00:52:51.899714   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Writing SSH key tar header
	I0914 00:52:51.899726   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.899685   66991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 ...
	I0914 00:52:51.899876   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084
	I0914 00:52:51.899901   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0914 00:52:51.899915   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 (perms=drwx------)
	I0914 00:52:51.899925   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:51.899945   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0914 00:52:51.899957   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 00:52:51.899967   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins
	I0914 00:52:51.899975   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home
	I0914 00:52:51.899988   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0914 00:52:51.899998   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Skipping /home - not owner
	I0914 00:52:51.900017   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0914 00:52:51.900030   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0914 00:52:51.900057   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 00:52:51.900075   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 00:52:51.900089   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 00:52:51.901527   66801 main.go:141] libmachine: (old-k8s-version-431084) define libvirt domain using xml: 
	I0914 00:52:51.901552   66801 main.go:141] libmachine: (old-k8s-version-431084) <domain type='kvm'>
	I0914 00:52:51.901574   66801 main.go:141] libmachine: (old-k8s-version-431084)   <name>old-k8s-version-431084</name>
	I0914 00:52:51.901605   66801 main.go:141] libmachine: (old-k8s-version-431084)   <memory unit='MiB'>2200</memory>
	I0914 00:52:51.901613   66801 main.go:141] libmachine: (old-k8s-version-431084)   <vcpu>2</vcpu>
	I0914 00:52:51.901626   66801 main.go:141] libmachine: (old-k8s-version-431084)   <features>
	I0914 00:52:51.901633   66801 main.go:141] libmachine: (old-k8s-version-431084)     <acpi/>
	I0914 00:52:51.901644   66801 main.go:141] libmachine: (old-k8s-version-431084)     <apic/>
	I0914 00:52:51.901649   66801 main.go:141] libmachine: (old-k8s-version-431084)     <pae/>
	I0914 00:52:51.901656   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.901675   66801 main.go:141] libmachine: (old-k8s-version-431084)   </features>
	I0914 00:52:51.901691   66801 main.go:141] libmachine: (old-k8s-version-431084)   <cpu mode='host-passthrough'>
	I0914 00:52:51.901731   66801 main.go:141] libmachine: (old-k8s-version-431084)   
	I0914 00:52:51.901751   66801 main.go:141] libmachine: (old-k8s-version-431084)   </cpu>
	I0914 00:52:51.901761   66801 main.go:141] libmachine: (old-k8s-version-431084)   <os>
	I0914 00:52:51.901771   66801 main.go:141] libmachine: (old-k8s-version-431084)     <type>hvm</type>
	I0914 00:52:51.901780   66801 main.go:141] libmachine: (old-k8s-version-431084)     <boot dev='cdrom'/>
	I0914 00:52:51.901786   66801 main.go:141] libmachine: (old-k8s-version-431084)     <boot dev='hd'/>
	I0914 00:52:51.901795   66801 main.go:141] libmachine: (old-k8s-version-431084)     <bootmenu enable='no'/>
	I0914 00:52:51.901800   66801 main.go:141] libmachine: (old-k8s-version-431084)   </os>
	I0914 00:52:51.901809   66801 main.go:141] libmachine: (old-k8s-version-431084)   <devices>
	I0914 00:52:51.901816   66801 main.go:141] libmachine: (old-k8s-version-431084)     <disk type='file' device='cdrom'>
	I0914 00:52:51.901829   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/boot2docker.iso'/>
	I0914 00:52:51.901836   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target dev='hdc' bus='scsi'/>
	I0914 00:52:51.901857   66801 main.go:141] libmachine: (old-k8s-version-431084)       <readonly/>
	I0914 00:52:51.901867   66801 main.go:141] libmachine: (old-k8s-version-431084)     </disk>
	I0914 00:52:51.901878   66801 main.go:141] libmachine: (old-k8s-version-431084)     <disk type='file' device='disk'>
	I0914 00:52:51.901892   66801 main.go:141] libmachine: (old-k8s-version-431084)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 00:52:51.901909   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/old-k8s-version-431084.rawdisk'/>
	I0914 00:52:51.901920   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target dev='hda' bus='virtio'/>
	I0914 00:52:51.901931   66801 main.go:141] libmachine: (old-k8s-version-431084)     </disk>
	I0914 00:52:51.901943   66801 main.go:141] libmachine: (old-k8s-version-431084)     <interface type='network'>
	I0914 00:52:51.901957   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source network='mk-old-k8s-version-431084'/>
	I0914 00:52:51.901966   66801 main.go:141] libmachine: (old-k8s-version-431084)       <model type='virtio'/>
	I0914 00:52:51.901975   66801 main.go:141] libmachine: (old-k8s-version-431084)     </interface>
	I0914 00:52:51.901982   66801 main.go:141] libmachine: (old-k8s-version-431084)     <interface type='network'>
	I0914 00:52:51.901994   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source network='default'/>
	I0914 00:52:51.902000   66801 main.go:141] libmachine: (old-k8s-version-431084)       <model type='virtio'/>
	I0914 00:52:51.902010   66801 main.go:141] libmachine: (old-k8s-version-431084)     </interface>
	I0914 00:52:51.902021   66801 main.go:141] libmachine: (old-k8s-version-431084)     <serial type='pty'>
	I0914 00:52:51.902033   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target port='0'/>
	I0914 00:52:51.902040   66801 main.go:141] libmachine: (old-k8s-version-431084)     </serial>
	I0914 00:52:51.902052   66801 main.go:141] libmachine: (old-k8s-version-431084)     <console type='pty'>
	I0914 00:52:51.902062   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target type='serial' port='0'/>
	I0914 00:52:51.902072   66801 main.go:141] libmachine: (old-k8s-version-431084)     </console>
	I0914 00:52:51.902081   66801 main.go:141] libmachine: (old-k8s-version-431084)     <rng model='virtio'>
	I0914 00:52:51.902091   66801 main.go:141] libmachine: (old-k8s-version-431084)       <backend model='random'>/dev/random</backend>
	I0914 00:52:51.902100   66801 main.go:141] libmachine: (old-k8s-version-431084)     </rng>
	I0914 00:52:51.902107   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.902116   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.902133   66801 main.go:141] libmachine: (old-k8s-version-431084)   </devices>
	I0914 00:52:51.902144   66801 main.go:141] libmachine: (old-k8s-version-431084) </domain>
	I0914 00:52:51.902155   66801 main.go:141] libmachine: (old-k8s-version-431084) 
	I0914 00:52:51.906817   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:63:e1:fc in network default
	I0914 00:52:51.907735   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 00:52:51.907769   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:51.908690   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 00:52:51.909010   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 00:52:51.909570   66801 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 00:52:51.910517   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 00:52:53.472296   66801 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 00:52:53.473458   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:53.474119   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:53.474172   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:53.474109   66991 retry.go:31] will retry after 277.653713ms: waiting for machine to come up
	I0914 00:52:53.753876   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:53.755354   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:53.755382   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:53.755255   66991 retry.go:31] will retry after 372.557708ms: waiting for machine to come up
	I0914 00:52:54.129933   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.130551   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.130578   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.130504   66991 retry.go:31] will retry after 329.217104ms: waiting for machine to come up
	I0914 00:52:54.461115   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.461742   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.461767   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.461660   66991 retry.go:31] will retry after 534.468325ms: waiting for machine to come up
	I0914 00:52:54.998338   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.999189   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.999215   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.999096   66991 retry.go:31] will retry after 529.424126ms: waiting for machine to come up
	I0914 00:52:55.529670   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:55.530157   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:55.530193   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:55.530103   66991 retry.go:31] will retry after 701.848536ms: waiting for machine to come up
	I0914 00:52:56.925508   65519 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.406768902s)
	I0914 00:52:56.925543   65519 crio.go:469] duration metric: took 2.406883237s to extract the tarball
	I0914 00:52:56.925552   65519 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 00:52:56.975908   65519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:52:57.017587   65519 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:52:57.017610   65519 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:52:57.017620   65519 kubeadm.go:934] updating node { 192.168.50.31 8443 v1.31.1 crio true true} ...
	I0914 00:52:57.017729   65519 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-670449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:bridge-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0914 00:52:57.017808   65519 ssh_runner.go:195] Run: crio config
	I0914 00:52:57.064465   65519 cni.go:84] Creating CNI manager for "bridge"
	I0914 00:52:57.064490   65519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:52:57.064515   65519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-670449 NodeName:bridge-670449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:52:57.064701   65519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-670449"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:52:57.064773   65519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:52:57.075220   65519 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:52:57.075294   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:52:57.084561   65519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0914 00:52:57.101110   65519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:52:57.119668   65519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0914 00:52:57.136196   65519 ssh_runner.go:195] Run: grep 192.168.50.31	control-plane.minikube.internal$ /etc/hosts
	I0914 00:52:57.140005   65519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:52:57.152566   65519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:52:57.274730   65519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:52:57.291839   65519 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449 for IP: 192.168.50.31
	I0914 00:52:57.291859   65519 certs.go:194] generating shared ca certs ...
	I0914 00:52:57.291893   65519 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.292057   65519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:52:57.292117   65519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:52:57.292131   65519 certs.go:256] generating profile certs ...
	I0914 00:52:57.292214   65519 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.key
	I0914 00:52:57.292275   65519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt with IP's: []
	I0914 00:52:57.470771   65519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt ...
	I0914 00:52:57.470801   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: {Name:mkfae33963ef664b8dafda0c7b72fc834cfda5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.470997   65519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.key ...
	I0914 00:52:57.471012   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.key: {Name:mkd1d4d92b1a73a92a82f171f41ed38f2d046626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.471123   65519 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68
	I0914 00:52:57.471140   65519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.31]
	I0914 00:52:57.952463   65519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68 ...
	I0914 00:52:57.952498   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68: {Name:mk96d20ef2a9061df72d43920f79694c959175bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.952696   65519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68 ...
	I0914 00:52:57.952713   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68: {Name:mk5fbc43cd82e9fc09c39819ffeaa17abab4487f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.952813   65519 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt
	I0914 00:52:57.952905   65519 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key
	I0914 00:52:57.952964   65519 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key
	I0914 00:52:57.952979   65519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt with IP's: []
	I0914 00:52:58.138893   65519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt ...
	I0914 00:52:58.138923   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt: {Name:mk2d161bad34687b448a56b19baf23e332cfbddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:58.139112   65519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key ...
	I0914 00:52:58.139132   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key: {Name:mk816a55f2fe7fde072a4a7bded931e7c853cfdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:58.139349   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:52:58.139393   65519 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:52:58.139408   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:52:58.139439   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:52:58.139468   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:52:58.139499   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:52:58.139552   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:52:58.140143   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:52:58.180444   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:52:58.210855   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:52:58.236251   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:52:58.259982   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:52:58.286127   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 00:52:58.310403   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:52:58.334106   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:52:58.364875   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:52:58.395178   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:52:58.421481   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:52:58.445049   65519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:52:58.461391   65519 ssh_runner.go:195] Run: openssl version
	I0914 00:52:58.467369   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:52:58.478084   65519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:52:58.484262   65519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:52:58.484330   65519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:52:58.492517   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:52:58.507832   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:52:58.518372   65519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:52:58.523964   65519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:52:58.524024   65519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:52:58.529585   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:52:58.539746   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:52:58.550165   65519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:52:58.554708   65519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:52:58.554777   65519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:52:58.560404   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:52:58.571620   65519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:52:58.575471   65519 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:52:58.575522   65519 kubeadm.go:392] StartCluster: {Name:bridge-670449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:52:58.575599   65519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:52:58.575654   65519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:52:58.619220   65519 cri.go:89] found id: ""
	I0914 00:52:58.619295   65519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:52:58.629608   65519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:52:58.639644   65519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:52:58.650965   65519 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:52:58.650989   65519 kubeadm.go:157] found existing configuration files:
	
	I0914 00:52:58.651037   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:52:58.660834   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:52:58.660896   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:52:58.670505   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:52:58.679253   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:52:58.679339   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:52:58.689420   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:52:58.698449   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:52:58.698523   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:52:58.707910   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:52:58.716706   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:52:58.716759   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:52:58.725842   65519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 00:52:58.780499   65519 kubeadm.go:310] W0914 00:52:58.731528     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:58.781527   65519 kubeadm.go:310] W0914 00:52:58.732757     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:58.906312   65519 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:52:55.630508   63660 node_ready.go:49] node "flannel-670449" has status "Ready":"True"
	I0914 00:52:55.630539   63660 node_ready.go:38] duration metric: took 8.009665155s for node "flannel-670449" to be "Ready" ...
	I0914 00:52:55.630551   63660 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:52:55.641111   63660 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace to be "Ready" ...
	I0914 00:52:57.647983   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:52:59.730897   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:52:56.234175   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:56.234644   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:56.234675   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:56.234584   66991 retry.go:31] will retry after 926.106578ms: waiting for machine to come up
	I0914 00:52:57.162172   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:57.162686   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:57.162715   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:57.162647   66991 retry.go:31] will retry after 1.270446243s: waiting for machine to come up
	I0914 00:52:58.435104   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:58.435636   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:58.435665   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:58.435587   66991 retry.go:31] will retry after 1.16744392s: waiting for machine to come up
	I0914 00:52:59.604970   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:59.605514   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:59.605541   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:59.605457   66991 retry.go:31] will retry after 1.768720127s: waiting for machine to come up
	I0914 00:53:01.300438   57689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (10.095520546s)
	I0914 00:53:01.300489   57689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 00:53:01.306293   57689 kubeadm.go:739] kubelet initialised
	I0914 00:53:01.306326   57689 kubeadm.go:740] duration metric: took 5.824594ms waiting for restarted kubelet to initialise ...
	I0914 00:53:01.306338   57689 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:01.313577   57689 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.323988   57689 pod_ready.go:93] pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:01.324021   57689 pod_ready.go:82] duration metric: took 10.407598ms for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.324038   57689 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.330569   57689 pod_ready.go:93] pod "etcd-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:01.330597   57689 pod_ready.go:82] duration metric: took 6.5482ms for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.330609   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.337384   57689 pod_ready.go:93] pod "kube-apiserver-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:01.337411   57689 pod_ready.go:82] duration metric: took 6.793361ms for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.337426   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:03.346882   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:02.147998   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:04.149019   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:01.375890   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:01.376460   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:01.376502   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:01.376418   66991 retry.go:31] will retry after 2.152913439s: waiting for machine to come up
	I0914 00:53:03.530645   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:03.531243   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:03.531267   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:03.531195   66991 retry.go:31] will retry after 2.194352636s: waiting for machine to come up
	I0914 00:53:08.387115   65519 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 00:53:08.387167   65519 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:53:08.387299   65519 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:53:08.387408   65519 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:53:08.387494   65519 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 00:53:08.387556   65519 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:53:08.388990   65519 out.go:235]   - Generating certificates and keys ...
	I0914 00:53:08.389061   65519 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:53:08.389122   65519 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:53:08.389212   65519 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:53:08.389275   65519 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:53:08.389364   65519 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:53:08.389435   65519 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:53:08.389502   65519 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:53:08.389660   65519 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-670449 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0914 00:53:08.389732   65519 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:53:08.389930   65519 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-670449 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0914 00:53:08.390001   65519 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:53:08.390069   65519 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:53:08.390130   65519 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:53:08.390218   65519 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:53:08.390273   65519 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:53:08.390326   65519 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 00:53:08.390373   65519 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:53:08.390446   65519 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:53:08.390512   65519 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:53:08.390602   65519 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:53:08.390692   65519 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:53:08.392172   65519 out.go:235]   - Booting up control plane ...
	I0914 00:53:08.392256   65519 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:53:08.392361   65519 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:53:08.392455   65519 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:53:08.392560   65519 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:53:08.392639   65519 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:53:08.392674   65519 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:53:08.392789   65519 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 00:53:08.392880   65519 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 00:53:08.392946   65519 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.829153ms
	I0914 00:53:08.393036   65519 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:53:08.393123   65519 kubeadm.go:310] [api-check] The API server is healthy after 5.00176527s
	I0914 00:53:08.393274   65519 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:53:08.393457   65519 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:53:08.393544   65519 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:53:08.393733   65519 kubeadm.go:310] [mark-control-plane] Marking the node bridge-670449 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:53:08.393785   65519 kubeadm.go:310] [bootstrap-token] Using token: s2sitp.rzxqwa1q7sidpzu1
	I0914 00:53:08.395050   65519 out.go:235]   - Configuring RBAC rules ...
	I0914 00:53:08.395193   65519 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:53:08.395324   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:53:08.395481   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:53:08.395659   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:53:08.395769   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:53:08.395893   65519 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:53:08.396023   65519 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:53:08.396091   65519 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:53:08.396150   65519 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:53:08.396159   65519 kubeadm.go:310] 
	I0914 00:53:08.396233   65519 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:53:08.396245   65519 kubeadm.go:310] 
	I0914 00:53:08.396381   65519 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:53:08.396400   65519 kubeadm.go:310] 
	I0914 00:53:08.396442   65519 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:53:08.396521   65519 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:53:08.396590   65519 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:53:08.396605   65519 kubeadm.go:310] 
	I0914 00:53:08.396662   65519 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:53:08.396674   65519 kubeadm.go:310] 
	I0914 00:53:08.396743   65519 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:53:08.396760   65519 kubeadm.go:310] 
	I0914 00:53:08.396820   65519 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:53:08.396916   65519 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:53:08.396999   65519 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:53:08.397016   65519 kubeadm.go:310] 
	I0914 00:53:08.397100   65519 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:53:08.397193   65519 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:53:08.397205   65519 kubeadm.go:310] 
	I0914 00:53:08.397296   65519 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s2sitp.rzxqwa1q7sidpzu1 \
	I0914 00:53:08.397387   65519 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 00:53:08.397407   65519 kubeadm.go:310] 	--control-plane 
	I0914 00:53:08.397411   65519 kubeadm.go:310] 
	I0914 00:53:08.397480   65519 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:53:08.397486   65519 kubeadm.go:310] 
	I0914 00:53:08.397581   65519 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s2sitp.rzxqwa1q7sidpzu1 \
	I0914 00:53:08.397725   65519 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 00:53:08.397741   65519 cni.go:84] Creating CNI manager for "bridge"
	I0914 00:53:08.400000   65519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 00:53:05.844330   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:08.344248   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:06.649281   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:08.146989   63660 pod_ready.go:93] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.147012   63660 pod_ready.go:82] duration metric: took 12.505857681s for pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.147026   63660 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.150834   63660 pod_ready.go:93] pod "etcd-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.150856   63660 pod_ready.go:82] duration metric: took 3.822883ms for pod "etcd-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.150867   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.155749   63660 pod_ready.go:93] pod "kube-apiserver-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.155766   63660 pod_ready.go:82] duration metric: took 4.892502ms for pod "kube-apiserver-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.155778   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.159630   63660 pod_ready.go:93] pod "kube-controller-manager-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.159654   63660 pod_ready.go:82] duration metric: took 3.851407ms for pod "kube-controller-manager-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.159668   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-x74lz" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.163809   63660 pod_ready.go:93] pod "kube-proxy-x74lz" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.163830   63660 pod_ready.go:82] duration metric: took 4.154557ms for pod "kube-proxy-x74lz" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.163840   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.545884   63660 pod_ready.go:93] pod "kube-scheduler-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.545914   63660 pod_ready.go:82] duration metric: took 382.066072ms for pod "kube-scheduler-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.545926   63660 pod_ready.go:39] duration metric: took 12.915347583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:08.545939   63660 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:53:08.545987   63660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:53:08.561058   63660 api_server.go:72] duration metric: took 21.758482762s to wait for apiserver process to appear ...
	I0914 00:53:08.561088   63660 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:53:08.561111   63660 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0914 00:53:08.566419   63660 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0914 00:53:08.567390   63660 api_server.go:141] control plane version: v1.31.1
	I0914 00:53:08.567415   63660 api_server.go:131] duration metric: took 6.320118ms to wait for apiserver health ...
	I0914 00:53:08.567424   63660 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:53:08.748094   63660 system_pods.go:59] 7 kube-system pods found
	I0914 00:53:08.748128   63660 system_pods.go:61] "coredns-7c65d6cfc9-tm2ff" [31003b21-1677-433c-a949-70b7f1890ac4] Running
	I0914 00:53:08.748137   63660 system_pods.go:61] "etcd-flannel-670449" [faa35475-fa7c-4330-b73a-8960699360aa] Running
	I0914 00:53:08.748142   63660 system_pods.go:61] "kube-apiserver-flannel-670449" [da3a74e7-8805-4ff0-b3ad-374c17a275d9] Running
	I0914 00:53:08.748147   63660 system_pods.go:61] "kube-controller-manager-flannel-670449" [7af6638a-2187-4ed1-ad59-f34fbdc221a6] Running
	I0914 00:53:08.748152   63660 system_pods.go:61] "kube-proxy-x74lz" [ae50b997-6893-4038-80e9-909762ffafdb] Running
	I0914 00:53:08.748156   63660 system_pods.go:61] "kube-scheduler-flannel-670449" [303fa421-2d64-4f1f-9ad7-73d9bf1d193e] Running
	I0914 00:53:08.748160   63660 system_pods.go:61] "storage-provisioner" [325fa443-8cd6-4168-8e04-4be556773543] Running
	I0914 00:53:08.748168   63660 system_pods.go:74] duration metric: took 180.737829ms to wait for pod list to return data ...
	I0914 00:53:08.748178   63660 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:53:08.945680   63660 default_sa.go:45] found service account: "default"
	I0914 00:53:08.945718   63660 default_sa.go:55] duration metric: took 197.531742ms for default service account to be created ...
	I0914 00:53:08.945730   63660 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:53:09.148839   63660 system_pods.go:86] 7 kube-system pods found
	I0914 00:53:09.148866   63660 system_pods.go:89] "coredns-7c65d6cfc9-tm2ff" [31003b21-1677-433c-a949-70b7f1890ac4] Running
	I0914 00:53:09.148901   63660 system_pods.go:89] "etcd-flannel-670449" [faa35475-fa7c-4330-b73a-8960699360aa] Running
	I0914 00:53:09.148907   63660 system_pods.go:89] "kube-apiserver-flannel-670449" [da3a74e7-8805-4ff0-b3ad-374c17a275d9] Running
	I0914 00:53:09.148916   63660 system_pods.go:89] "kube-controller-manager-flannel-670449" [7af6638a-2187-4ed1-ad59-f34fbdc221a6] Running
	I0914 00:53:09.148920   63660 system_pods.go:89] "kube-proxy-x74lz" [ae50b997-6893-4038-80e9-909762ffafdb] Running
	I0914 00:53:09.148924   63660 system_pods.go:89] "kube-scheduler-flannel-670449" [303fa421-2d64-4f1f-9ad7-73d9bf1d193e] Running
	I0914 00:53:09.148929   63660 system_pods.go:89] "storage-provisioner" [325fa443-8cd6-4168-8e04-4be556773543] Running
	I0914 00:53:09.148934   63660 system_pods.go:126] duration metric: took 203.199763ms to wait for k8s-apps to be running ...
	I0914 00:53:09.148943   63660 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:53:09.148988   63660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:53:09.168619   63660 system_svc.go:56] duration metric: took 19.667361ms WaitForService to wait for kubelet
	I0914 00:53:09.168644   63660 kubeadm.go:582] duration metric: took 22.366082733s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:53:09.168660   63660 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:53:09.346444   63660 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:53:09.346476   63660 node_conditions.go:123] node cpu capacity is 2
	I0914 00:53:09.346490   63660 node_conditions.go:105] duration metric: took 177.825586ms to run NodePressure ...
	I0914 00:53:09.346507   63660 start.go:241] waiting for startup goroutines ...
	I0914 00:53:09.346515   63660 start.go:246] waiting for cluster config update ...
	I0914 00:53:09.346527   63660 start.go:255] writing updated cluster config ...
	I0914 00:53:09.346769   63660 ssh_runner.go:195] Run: rm -f paused
	I0914 00:53:09.395441   63660 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:53:09.397402   63660 out.go:177] * Done! kubectl is now configured to use "flannel-670449" cluster and "default" namespace by default
	I0914 00:53:08.401126   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 00:53:08.413582   65519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 00:53:08.433052   65519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:53:08.433141   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:08.433154   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-670449 minikube.k8s.io/updated_at=2024_09_14T00_53_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=bridge-670449 minikube.k8s.io/primary=true
	I0914 00:53:08.578648   65519 ops.go:34] apiserver oom_adj: -16
	I0914 00:53:08.578774   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:09.079901   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:09.579694   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:10.079370   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:05.728371   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:05.728822   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:05.728843   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:05.728779   66991 retry.go:31] will retry after 3.501013157s: waiting for machine to come up
	I0914 00:53:09.231390   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:09.232039   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:09.232061   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:09.231992   66991 retry.go:31] will retry after 4.974590479s: waiting for machine to come up
	I0914 00:53:10.345071   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:12.344418   57689 pod_ready.go:93] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.344442   57689 pod_ready.go:82] duration metric: took 11.007008667s for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.344451   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.350053   57689 pod_ready.go:93] pod "kube-proxy-djqjf" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.350076   57689 pod_ready.go:82] duration metric: took 5.618815ms for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.350085   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.355857   57689 pod_ready.go:93] pod "kube-scheduler-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.355879   57689 pod_ready.go:82] duration metric: took 5.787493ms for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.355887   57689 pod_ready.go:39] duration metric: took 11.049537682s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:12.355907   57689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:53:12.368790   57689 ops.go:34] apiserver oom_adj: -16
	I0914 00:53:12.368811   57689 kubeadm.go:597] duration metric: took 3m43.322570748s to restartPrimaryControlPlane
	I0914 00:53:12.368820   57689 kubeadm.go:394] duration metric: took 3m43.525907543s to StartCluster
	I0914 00:53:12.368836   57689 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.368936   57689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:53:12.369836   57689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.370082   57689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:53:12.370150   57689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:53:12.370331   57689 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:53:12.371618   57689 out.go:177] * Verifying Kubernetes components...
	I0914 00:53:12.372284   57689 out.go:177] * Enabled addons: 
	I0914 00:53:10.578807   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:11.079442   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:11.579475   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:12.079216   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:12.579698   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:12.701611   65519 kubeadm.go:1113] duration metric: took 4.268537321s to wait for elevateKubeSystemPrivileges
	I0914 00:53:12.701651   65519 kubeadm.go:394] duration metric: took 14.126132772s to StartCluster
	I0914 00:53:12.701671   65519 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.701756   65519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:53:12.702785   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.703000   65519 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:53:12.703033   65519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:53:12.703087   65519 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:53:12.703181   65519 addons.go:69] Setting storage-provisioner=true in profile "bridge-670449"
	I0914 00:53:12.703200   65519 addons.go:234] Setting addon storage-provisioner=true in "bridge-670449"
	I0914 00:53:12.703199   65519 addons.go:69] Setting default-storageclass=true in profile "bridge-670449"
	I0914 00:53:12.703222   65519 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:53:12.703229   65519 host.go:66] Checking if "bridge-670449" exists ...
	I0914 00:53:12.703232   65519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-670449"
	I0914 00:53:12.703671   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.703697   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.703722   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.703751   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.704767   65519 out.go:177] * Verifying Kubernetes components...
	I0914 00:53:12.706226   65519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:12.719766   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33419
	I0914 00:53:12.720024   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0914 00:53:12.720241   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.720604   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.720797   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.720817   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.721104   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.721131   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.721163   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.721469   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.721638   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:53:12.721702   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.721738   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.725490   65519 addons.go:234] Setting addon default-storageclass=true in "bridge-670449"
	I0914 00:53:12.725544   65519 host.go:66] Checking if "bridge-670449" exists ...
	I0914 00:53:12.725929   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.725963   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.739911   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0914 00:53:12.740329   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.741071   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.741100   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.741497   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0914 00:53:12.741650   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.741882   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.741961   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:53:12.742457   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.742476   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.743027   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.743703   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.743735   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.745022   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:53:12.746862   65519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:53:12.373191   57689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:12.373831   57689 addons.go:510] duration metric: took 3.682189ms for enable addons: enabled=[]
	I0914 00:53:12.543638   57689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:53:12.562862   57689 node_ready.go:35] waiting up to 6m0s for node "pause-609507" to be "Ready" ...
	I0914 00:53:12.566498   57689 node_ready.go:49] node "pause-609507" has status "Ready":"True"
	I0914 00:53:12.566522   57689 node_ready.go:38] duration metric: took 3.626071ms for node "pause-609507" to be "Ready" ...
	I0914 00:53:12.566531   57689 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:12.570847   57689 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.575554   57689 pod_ready.go:93] pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.575574   57689 pod_ready.go:82] duration metric: took 4.700003ms for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.575583   57689 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.743080   57689 pod_ready.go:93] pod "etcd-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.743100   57689 pod_ready.go:82] duration metric: took 167.511528ms for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.743118   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.142378   57689 pod_ready.go:93] pod "kube-apiserver-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:13.142411   57689 pod_ready.go:82] duration metric: took 399.284578ms for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.142422   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.541914   57689 pod_ready.go:93] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:13.541943   57689 pod_ready.go:82] duration metric: took 399.514151ms for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.541956   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.943146   57689 pod_ready.go:93] pod "kube-proxy-djqjf" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:13.943169   57689 pod_ready.go:82] duration metric: took 401.20562ms for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.943179   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.747973   65519 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:53:12.747987   65519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:53:12.748001   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:53:12.751069   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.751448   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:53:12.751467   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.751714   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:53:12.751940   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:53:12.752153   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:53:12.752296   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:53:12.760286   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0914 00:53:12.760749   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.761313   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.761339   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.761645   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.761824   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:53:12.763530   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:53:12.763730   65519 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:53:12.763747   65519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:53:12.763763   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:53:12.767041   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.767558   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:53:12.767585   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.767841   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:53:12.767992   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:53:12.768078   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:53:12.768176   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:53:12.956433   65519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:53:12.956665   65519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:53:12.981089   65519 node_ready.go:35] waiting up to 15m0s for node "bridge-670449" to be "Ready" ...
	I0914 00:53:13.001461   65519 node_ready.go:49] node "bridge-670449" has status "Ready":"True"
	I0914 00:53:13.001489   65519 node_ready.go:38] duration metric: took 20.372776ms for node "bridge-670449" to be "Ready" ...
	I0914 00:53:13.001502   65519 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:13.025192   65519 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-cw297" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.187242   65519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:53:13.234703   65519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:53:13.453862   65519 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0914 00:53:13.659692   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:13.659719   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:13.660027   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:13.660045   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:13.660054   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:13.660061   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:13.660317   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:13.660325   65519 main.go:141] libmachine: (bridge-670449) DBG | Closing plugin on server side
	I0914 00:53:13.660332   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:13.665507   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:13.665531   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:13.665791   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:13.665810   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:13.962125   65519 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-670449" context rescaled to 1 replicas
	I0914 00:53:14.296626   65519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061879112s)
	I0914 00:53:14.296685   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:14.296700   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:14.297714   65519 main.go:141] libmachine: (bridge-670449) DBG | Closing plugin on server side
	I0914 00:53:14.297739   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:14.297754   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:14.297763   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:14.297770   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:14.298051   65519 main.go:141] libmachine: (bridge-670449) DBG | Closing plugin on server side
	I0914 00:53:14.298098   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:14.298116   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:14.300137   65519 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 00:53:14.342880   57689 pod_ready.go:93] pod "kube-scheduler-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:14.342906   57689 pod_ready.go:82] duration metric: took 399.720014ms for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:14.342917   57689 pod_ready.go:39] duration metric: took 1.776376455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:14.342935   57689 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:53:14.342993   57689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:53:14.362069   57689 api_server.go:72] duration metric: took 1.991949805s to wait for apiserver process to appear ...
	I0914 00:53:14.362098   57689 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:53:14.362121   57689 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0914 00:53:14.366529   57689 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0914 00:53:14.367413   57689 api_server.go:141] control plane version: v1.31.1
	I0914 00:53:14.367438   57689 api_server.go:131] duration metric: took 5.332951ms to wait for apiserver health ...
	I0914 00:53:14.367449   57689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:53:14.544956   57689 system_pods.go:59] 6 kube-system pods found
	I0914 00:53:14.544989   57689 system_pods.go:61] "coredns-7c65d6cfc9-jjdnr" [17391162-c95e-489d-825a-a869da462757] Running
	I0914 00:53:14.544997   57689 system_pods.go:61] "etcd-pause-609507" [3a57c2e5-009f-4f67-a8a2-0eeaf0a939a8] Running
	I0914 00:53:14.545002   57689 system_pods.go:61] "kube-apiserver-pause-609507" [35a9e7ba-4d49-486b-b21c-587b2cc63010] Running
	I0914 00:53:14.545008   57689 system_pods.go:61] "kube-controller-manager-pause-609507" [200bcfc3-e090-4792-9c94-7f448edd86be] Running
	I0914 00:53:14.545014   57689 system_pods.go:61] "kube-proxy-djqjf" [ca94aecb-0013-45fc-b541-7d11e5f7089e] Running
	I0914 00:53:14.545019   57689 system_pods.go:61] "kube-scheduler-pause-609507" [64772355-1ba0-46f4-a07d-9db6aee07b73] Running
	I0914 00:53:14.545027   57689 system_pods.go:74] duration metric: took 177.570103ms to wait for pod list to return data ...
	I0914 00:53:14.545040   57689 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:53:14.743308   57689 default_sa.go:45] found service account: "default"
	I0914 00:53:14.743342   57689 default_sa.go:55] duration metric: took 198.291849ms for default service account to be created ...
	I0914 00:53:14.743355   57689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:53:14.944071   57689 system_pods.go:86] 6 kube-system pods found
	I0914 00:53:14.944097   57689 system_pods.go:89] "coredns-7c65d6cfc9-jjdnr" [17391162-c95e-489d-825a-a869da462757] Running
	I0914 00:53:14.944103   57689 system_pods.go:89] "etcd-pause-609507" [3a57c2e5-009f-4f67-a8a2-0eeaf0a939a8] Running
	I0914 00:53:14.944106   57689 system_pods.go:89] "kube-apiserver-pause-609507" [35a9e7ba-4d49-486b-b21c-587b2cc63010] Running
	I0914 00:53:14.944110   57689 system_pods.go:89] "kube-controller-manager-pause-609507" [200bcfc3-e090-4792-9c94-7f448edd86be] Running
	I0914 00:53:14.944113   57689 system_pods.go:89] "kube-proxy-djqjf" [ca94aecb-0013-45fc-b541-7d11e5f7089e] Running
	I0914 00:53:14.944116   57689 system_pods.go:89] "kube-scheduler-pause-609507" [64772355-1ba0-46f4-a07d-9db6aee07b73] Running
	I0914 00:53:14.944125   57689 system_pods.go:126] duration metric: took 200.763397ms to wait for k8s-apps to be running ...
	I0914 00:53:14.944134   57689 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:53:14.944183   57689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:53:14.959855   57689 system_svc.go:56] duration metric: took 15.71469ms WaitForService to wait for kubelet
	I0914 00:53:14.959881   57689 kubeadm.go:582] duration metric: took 2.58977181s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:53:14.959897   57689 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:53:15.142221   57689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:53:15.142244   57689 node_conditions.go:123] node cpu capacity is 2
	I0914 00:53:15.142253   57689 node_conditions.go:105] duration metric: took 182.351283ms to run NodePressure ...
	I0914 00:53:15.142265   57689 start.go:241] waiting for startup goroutines ...
	I0914 00:53:15.142274   57689 start.go:246] waiting for cluster config update ...
	I0914 00:53:15.142284   57689 start.go:255] writing updated cluster config ...
	I0914 00:53:15.142577   57689 ssh_runner.go:195] Run: rm -f paused
	I0914 00:53:15.191743   57689 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:53:15.193710   57689 out.go:177] * Done! kubectl is now configured to use "pause-609507" cluster and "default" namespace by default
	W0914 00:53:15.199342   57689 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 9d4b0fec-5fe5-4cd6-a080-7e3a4dd20052
	I0914 00:53:14.301687   65519 addons.go:510] duration metric: took 1.598584871s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0914 00:53:15.031097   65519 pod_ready.go:103] pod "coredns-7c65d6cfc9-cw297" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Sep 14 00:53:15 pause-609507 crio[2371]: time="2024-09-14 00:53:15.969853037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70123913-4533-4f7a-9bd5-9a4e3d75bf3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:15 pause-609507 crio[2371]: time="2024-09-14 00:53:15.987715986Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dee0a556-56a2-44ce-b6ef-76949a4f3acf name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:53:15 pause-609507 crio[2371]: time="2024-09-14 00:53:15.987957910Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&PodSandboxMetadata{Name:kube-proxy-djqjf,Uid:ca94aecb-0013-45fc-b541-7d11e5f7089e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726275171628961630,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:49:42.948920429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-609507,Uid:e799416ae93e2f6eb005dc1e61fbd714,Na
mespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726274972196538669,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.112:8443,kubernetes.io/config.hash: e799416ae93e2f6eb005dc1e61fbd714,kubernetes.io/config.seen: 2024-09-14T00:48:29.546858304Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&PodSandboxMetadata{Name:etcd-pause-609507,Uid:35d9b11b4ec540257a59479195eaf4d6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726274968217339701,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.112:2379,kubernetes.io/config.hash: 35d9b11b4ec540257a59479195eaf4d6,kubernetes.io/config.seen: 2024-09-14T00:48:29.546853050Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jjdnr,Uid:17391162-c95e-489d-825a-a869da462757,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726274968092672327,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:48:34.945681446Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-609507,Uid:8d87185065e5c5b732f996180cc6b281,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726274967982033623,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d87185065e5c5b732f996180cc6b281,kubernetes.io/config.seen: 2024-09-14T00:48:29.546860849Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-609507,Uid:d2631685558a653ccf0023b0a3630f45,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726274967979211406,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2631685558a653ccf0023b0a3630f45,kubernetes.io/config.seen: 2024-09-14T00:48:29.546861971Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&PodSandboxMetadata{Name:kube-proxy-djqjf,Uid:ca94aecb-0013-45fc-b541-7d11e5f7089e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726274914844426755,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-14T00:48:34.528328861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-609507,Uid:e799416ae93e2f6eb005dc1e61fbd714,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726274904139666069,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.112:8443,kubernetes.io/config.hash: e799416ae93e2f6eb005dc1e61fbd714,kubernetes.io/config.seen: 2024-09-14T00:48:23.670138194Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dee0a556-56a2-44ce-b6ef-76949a4f3acf name=/runti
me.v1.RuntimeService/ListPodSandbox
	Sep 14 00:53:15 pause-609507 crio[2371]: time="2024-09-14 00:53:15.988481263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58c68a61-0a4e-4234-ba25-da11df88653c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:15 pause-609507 crio[2371]: time="2024-09-14 00:53:15.988583519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58c68a61-0a4e-4234-ba25-da11df88653c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:15 pause-609507 crio[2371]: time="2024-09-14 00:53:15.988894266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58c68a61-0a4e-4234-ba25-da11df88653c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.015702465Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0e2756b-675a-41f0-8371-471c3cb82fd6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.016013840Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&PodSandboxMetadata{Name:kube-proxy-djqjf,Uid:ca94aecb-0013-45fc-b541-7d11e5f7089e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726275171628961630,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:49:42.948920429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-609507,Uid:e799416ae93e2f6eb005dc1e61fbd714,Na
mespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726274972196538669,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.112:8443,kubernetes.io/config.hash: e799416ae93e2f6eb005dc1e61fbd714,kubernetes.io/config.seen: 2024-09-14T00:48:29.546858304Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&PodSandboxMetadata{Name:etcd-pause-609507,Uid:35d9b11b4ec540257a59479195eaf4d6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726274968217339701,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.112:2379,kubernetes.io/config.hash: 35d9b11b4ec540257a59479195eaf4d6,kubernetes.io/config.seen: 2024-09-14T00:48:29.546853050Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jjdnr,Uid:17391162-c95e-489d-825a-a869da462757,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726274968092672327,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T00:48:34.945681446Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-609507,Uid:8d87185065e5c5b732f996180cc6b281,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726274967982033623,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d87185065e5c5b732f996180cc6b281,kubernetes.io/config.seen: 2024-09-14T00:48:29.546860849Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-609507,Uid:d2631685558a653ccf0023b0a3630f45,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726274967979211406,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2631685558a653ccf0023b0a3630f45,kubernetes.io/config.seen: 2024-09-14T00:48:29.546861971Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c0e2756b-675a-41f0-8371-471c3cb82fd6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.016833205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f2b821e-670b-4337-b30e-827c4784d095 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.016888743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f2b821e-670b-4337-b30e-827c4784d095 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.017065281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d
6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f2b821e-670b-4337-b30e-827c4784d095 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.023519491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06c0d823-a53b-42dc-bed5-a683bdab1701 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.023620820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06c0d823-a53b-42dc-bed5-a683bdab1701 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.024771933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a06c3f40-37ad-4a06-a11f-f71e2f20a981 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.025193176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275196025170689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a06c3f40-37ad-4a06-a11f-f71e2f20a981 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.025855005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a76229f9-c324-473c-a044-970e9dea692e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.025948825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a76229f9-c324-473c-a044-970e9dea692e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.026364549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a76229f9-c324-473c-a044-970e9dea692e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.075532024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8307d3b0-be1c-42da-9ebb-71aa503cbdce name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.075690401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8307d3b0-be1c-42da-9ebb-71aa503cbdce name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.076800260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f69768d-df74-4386-a6c4-8554ee6e42d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.077184548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275196077152941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f69768d-df74-4386-a6c4-8554ee6e42d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.077723896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a639c70-1049-4899-ac1f-ad1b684a87e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.077786579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a639c70-1049-4899-ac1f-ad1b684a87e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:16 pause-609507 crio[2371]: time="2024-09-14 00:53:16.078043205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a639c70-1049-4899-ac1f-ad1b684a87e6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dafed29445983       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 seconds ago       Running             kube-controller-manager   4                   7cfac8b33aa9b       kube-controller-manager-pause-609507
	9580d0349e197       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   24 seconds ago       Running             kube-proxy                1                   d8e5011ba0b25       kube-proxy-djqjf
	a743e8a4eff31       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   24 seconds ago       Running             coredns                   2                   07b81ac60df45       coredns-7c65d6cfc9-jjdnr
	0fb6adc7c77a7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   3                   7cfac8b33aa9b       kube-controller-manager-pause-609507
	8cffdea91bc4f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   bafa2d3bb69f2       etcd-pause-609507
	9b0bfff8e7f47       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Running             kube-scheduler            2                   5c25f762714ec       kube-scheduler-pause-609507
	ae425e0fa034b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   3 minutes ago        Running             kube-apiserver            1                   b710b132614c2       kube-apiserver-pause-609507
	6093ecd4b31b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 minutes ago        Exited              coredns                   1                   07b81ac60df45       coredns-7c65d6cfc9-jjdnr
	174bce9f1ea9d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   3 minutes ago        Exited              etcd                      1                   bafa2d3bb69f2       etcd-pause-609507
	29b9d75e659af       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   3 minutes ago        Exited              kube-scheduler            1                   5c25f762714ec       kube-scheduler-pause-609507
	eafac013bbe30       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   4 minutes ago        Exited              kube-proxy                0                   e98833148ac1f       kube-proxy-djqjf
	429b60dcec5b5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   4 minutes ago        Exited              kube-apiserver            0                   160185de1867f       kube-apiserver-pause-609507
	
	
	==> coredns [6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33108 - 35724 "HINFO IN 4099614065336019101.5833173758431508590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014879208s
	
	
	==> coredns [a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36818 - 55484 "HINFO IN 4869981132163530117.679535294579518781. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009512125s
	
	
	==> describe nodes <==
	Name:               pause-609507
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-609507
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=pause-609507
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_48_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:48:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-609507
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:53:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    pause-609507
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 16805f4643924281a7d376302d49bd1e
	  System UUID:                16805f46-4392-4281-a7d3-76302d49bd1e
	  Boot ID:                    c3cec0dd-95a4-4e58-b1a0-71ec99d4e6ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jjdnr                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m42s
	  kube-system                 etcd-pause-609507                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         4m47s
	  kube-system                 kube-apiserver-pause-609507             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-pause-609507    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-djqjf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-pause-609507             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 24s                    kube-proxy       
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m53s (x8 over 4m53s)  kubelet          Node pause-609507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x8 over 4m53s)  kubelet          Node pause-609507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x7 over 4m53s)  kubelet          Node pause-609507 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s                  kubelet          Node pause-609507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                  kubelet          Node pause-609507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                  kubelet          Node pause-609507 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m46s                  kubelet          Node pause-609507 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                  node-controller  Node pause-609507 event: Registered Node pause-609507 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node pause-609507 event: Registered Node pause-609507 in Controller
	  Normal  NodeHasSufficientMemory  38s (x6 over 3m34s)    kubelet          Node pause-609507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x6 over 3m34s)    kubelet          Node pause-609507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x6 over 3m34s)    kubelet          Node pause-609507 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                    node-controller  Node pause-609507 event: Registered Node pause-609507 in Controller
	
	
	==> dmesg <==
	[  +0.064277] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063977] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.245186] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143717] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.339936] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +4.042152] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +3.928882] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.064208] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990603] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.122671] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.496638] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.288036] systemd-fstab-generator[1400]: Ignoring "noauto" option for root device
	[ +11.387294] kauditd_printk_skb: 97 callbacks suppressed
	[Sep14 00:49] systemd-fstab-generator[2224]: Ignoring "noauto" option for root device
	[  +0.141829] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.165517] systemd-fstab-generator[2250]: Ignoring "noauto" option for root device
	[  +0.145436] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.272556] systemd-fstab-generator[2290]: Ignoring "noauto" option for root device
	[  +0.948078] systemd-fstab-generator[2505]: Ignoring "noauto" option for root device
	[  +4.480849] kauditd_printk_skb: 187 callbacks suppressed
	[  +9.496509] systemd-fstab-generator[3169]: Ignoring "noauto" option for root device
	[Sep14 00:51] kauditd_printk_skb: 20 callbacks suppressed
	[Sep14 00:52] kauditd_printk_skb: 5 callbacks suppressed
	[ +58.420360] kauditd_printk_skb: 7 callbacks suppressed
	[Sep14 00:53] systemd-fstab-generator[4080]: Ignoring "noauto" option for root device
	
	
	==> etcd [174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688] <==
	{"level":"info","ts":"2024-09-14T00:49:30.609266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:49:30.609325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgPreVoteResp from b2f9167931180af7 at term 2"}
	{"level":"info","ts":"2024-09-14T00:49:30.609353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.609359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgVoteResp from b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.609368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.609375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2f9167931180af7 elected leader b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.614726Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:49:30.614681Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2f9167931180af7","local-member-attributes":"{Name:pause-609507 ClientURLs:[https://192.168.39.112:2379]}","request-path":"/0/members/b2f9167931180af7/attributes","cluster-id":"694778b4375dcf94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:49:30.615822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:49:30.615943Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:49:30.616179Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:49:30.616202Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:49:30.616687Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:49:30.617525Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:49:30.617528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.112:2379"}
	{"level":"info","ts":"2024-09-14T00:49:39.075200Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T00:49:39.075285Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-609507","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.112:2380"],"advertise-client-urls":["https://192.168.39.112:2379"]}
	{"level":"warn","ts":"2024-09-14T00:49:39.075388Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:49:39.075476Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:49:39.094022Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.112:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:49:39.094149Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.112:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T00:49:39.094244Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2f9167931180af7","current-leader-member-id":"b2f9167931180af7"}
	{"level":"info","ts":"2024-09-14T00:49:39.101781Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.112:2380"}
	{"level":"info","ts":"2024-09-14T00:49:39.101895Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.112:2380"}
	{"level":"info","ts":"2024-09-14T00:49:39.101906Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-609507","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.112:2380"],"advertise-client-urls":["https://192.168.39.112:2379"]}
	
	
	==> etcd [8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73] <==
	{"level":"info","ts":"2024-09-14T00:51:45.300477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-14T00:51:45.300664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:51:45.300734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgPreVoteResp from b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-09-14T00:51:45.300792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became candidate at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.300827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgVoteResp from b2f9167931180af7 at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.300855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became leader at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.300880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2f9167931180af7 elected leader b2f9167931180af7 at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.306693Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2f9167931180af7","local-member-attributes":"{Name:pause-609507 ClientURLs:[https://192.168.39.112:2379]}","request-path":"/0/members/b2f9167931180af7/attributes","cluster-id":"694778b4375dcf94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:51:45.306783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:51:45.307141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:51:45.307177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:51:45.307199Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:51:45.308501Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:51:45.309798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:51:45.308499Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:51:45.311114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.112:2379"}
	{"level":"warn","ts":"2024-09-14T00:52:01.749074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.592178ms","expected-duration":"100ms","prefix":"","request":"header:<ID:790260711076673286 > lease_revoke:<id:0af791ee019db111>","response":"size:28"}
	{"level":"info","ts":"2024-09-14T00:52:59.299300Z","caller":"traceutil/trace.go:171","msg":"trace[1998006713] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"272.480183ms","start":"2024-09-14T00:52:59.026790Z","end":"2024-09-14T00:52:59.299271Z","steps":["trace[1998006713] 'process raft request'  (duration: 272.35199ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:52:59.850297Z","caller":"traceutil/trace.go:171","msg":"trace[1890527220] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"117.482678ms","start":"2024-09-14T00:52:59.732793Z","end":"2024-09-14T00:52:59.850276Z","steps":["trace[1890527220] 'process raft request'  (duration: 117.365011ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:53:00.173674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.93685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-controller-manager-pause-609507.17f4f60f921a2fb4\" ","response":"range_response_count:1 size:807"}
	{"level":"info","ts":"2024-09-14T00:53:00.173855Z","caller":"traceutil/trace.go:171","msg":"trace[810928997] range","detail":"{range_begin:/registry/events/kube-system/kube-controller-manager-pause-609507.17f4f60f921a2fb4; range_end:; response_count:1; response_revision:544; }","duration":"101.166015ms","start":"2024-09-14T00:53:00.072670Z","end":"2024-09-14T00:53:00.173836Z","steps":["trace[810928997] 'range keys from in-memory index tree'  (duration: 100.773989ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:53:00.363826Z","caller":"traceutil/trace.go:171","msg":"trace[1431055995] linearizableReadLoop","detail":"{readStateIndex:585; appliedIndex:584; }","duration":"142.851377ms","start":"2024-09-14T00:53:00.220958Z","end":"2024-09-14T00:53:00.363809Z","steps":["trace[1431055995] 'read index received'  (duration: 142.683698ms)","trace[1431055995] 'applied index is now lower than readState.Index'  (duration: 167.163µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T00:53:00.364074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.098425ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:53:00.364130Z","caller":"traceutil/trace.go:171","msg":"trace[62468103] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:545; }","duration":"143.182159ms","start":"2024-09-14T00:53:00.220937Z","end":"2024-09-14T00:53:00.364119Z","steps":["trace[62468103] 'agreement among raft nodes before linearized reading'  (duration: 143.056423ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:53:00.364713Z","caller":"traceutil/trace.go:171","msg":"trace[279047246] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"188.472695ms","start":"2024-09-14T00:53:00.176223Z","end":"2024-09-14T00:53:00.364696Z","steps":["trace[279047246] 'process raft request'  (duration: 187.47662ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:53:16 up 5 min,  0 users,  load average: 0.09, 0.25, 0.15
	Linux pause-609507 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179] <==
	I0914 00:48:28.779685       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 00:48:29.034820       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:48:29.622516       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:48:29.655684       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 00:48:29.681591       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:48:34.487095       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0914 00:48:34.587308       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0914 00:49:20.046711       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0914 00:49:20.062850       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0914 00:49:20.066210       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066489       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066627       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066692       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066692       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.067873       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.068269       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.068359       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.068785       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.069506       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.069790       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071214       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071532       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071866       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071954       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.072450       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9] <==
	E0914 00:52:12.611670       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.649806ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-609507" result=null
	E0914 00:52:16.199268       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.546µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:16.203301       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.38µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:29.607065       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0914 00:52:29.609051       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:29.610253       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:29.611619       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:29.613090       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.469222ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-609507" result=null
	E0914 00:52:31.985858       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.29µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:35.175779       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.3µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:42.201208       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0914 00:52:42.202523       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:42.203692       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:42.204847       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:42.206097       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.019959ms" method="GET" path="/api/v1/nodes/pause-609507" result=null
	E0914 00:52:44.058743       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0914 00:52:44.060466       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:44.061649       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:44.062809       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:44.064077       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.101763ms" method="GET" path="/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication" result=null
	I0914 00:52:54.014078       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:52:54.045030       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:52:54.124881       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:53:01.255412       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:53:01.265918       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969] <==
	I0914 00:52:16.621743       1 serving.go:386] Generated self-signed cert in-memory
	I0914 00:52:17.070221       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 00:52:17.070340       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:52:17.072218       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 00:52:17.072370       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 00:52:17.072870       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 00:52:17.073465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 00:52:31.089325       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststar
thook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2] <==
	I0914 00:53:01.865172       1 shared_informer.go:320] Caches are synced for TTL
	I0914 00:53:01.867393       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0914 00:53:01.869758       1 shared_informer.go:320] Caches are synced for GC
	I0914 00:53:01.871024       1 shared_informer.go:320] Caches are synced for namespace
	I0914 00:53:01.872279       1 shared_informer.go:320] Caches are synced for daemon sets
	I0914 00:53:01.874748       1 shared_informer.go:320] Caches are synced for deployment
	I0914 00:53:01.904864       1 shared_informer.go:320] Caches are synced for service account
	I0914 00:53:01.978651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="159.228042ms"
	I0914 00:53:01.978736       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0914 00:53:01.979539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.696µs"
	I0914 00:53:01.981684       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 00:53:01.998850       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 00:53:02.013259       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0914 00:53:02.013363       1 shared_informer.go:320] Caches are synced for endpoint
	I0914 00:53:02.015823       1 shared_informer.go:320] Caches are synced for disruption
	I0914 00:53:02.023741       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0914 00:53:02.065190       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0914 00:53:02.115094       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 00:53:02.115210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 00:53:02.115860       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 00:53:02.116122       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 00:53:02.120091       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0914 00:53:02.522754       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 00:53:02.531326       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 00:53:02.531437       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:52:52.028356       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:52:52.038800       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.112"]
	E0914 00:52:52.038890       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:52:52.083031       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:52:52.083106       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:52:52.083153       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:52:52.086995       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:52:52.087593       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:52:52.087630       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:52:52.089634       1 config.go:199] "Starting service config controller"
	I0914 00:52:52.089705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:52:52.089766       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:52:52.089789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:52:52.092308       1 config.go:328] "Starting node config controller"
	I0914 00:52:52.098186       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:52:52.098198       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:52:52.190908       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:52:52.190962       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:48:35.518640       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:48:35.568954       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.112"]
	E0914 00:48:35.569498       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:48:35.672805       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:48:35.672847       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:48:35.672882       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:48:35.752465       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:48:35.754635       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:48:35.754671       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:48:35.767833       1 config.go:199] "Starting service config controller"
	I0914 00:48:35.769335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:48:35.769649       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:48:35.769694       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:48:35.776907       1 config.go:328] "Starting node config controller"
	I0914 00:48:35.776935       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:48:35.970470       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:48:35.977071       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:48:35.976574       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1] <==
	W0914 00:49:34.113071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 00:49:34.113101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:49:34.113203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:49:34.113355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:49:34.113460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:49:34.113612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 00:49:34.113759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:49:34.113869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:49:34.114079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.114139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:49:34.114170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.115769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:49:34.115863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.116459       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:49:34.121645       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 00:49:38.384639       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 00:49:39.213637       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0914 00:49:39.213813       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac] <==
	I0914 00:51:43.369515       1 serving.go:386] Generated self-signed cert in-memory
	W0914 00:52:44.060163       1 authentication.go:370] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	W0914 00:52:44.060617       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 00:52:44.060682       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 00:52:44.077518       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 00:52:44.077673       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:52:44.080381       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 00:52:44.080464       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 00:52:44.080636       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 00:52:44.080761       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 00:52:44.180706       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:52:32 pause-609507 kubelet[3176]: I0914 00:52:32.798781    3176 scope.go:117] "RemoveContainer" containerID="0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	Sep 14 00:52:32 pause-609507 kubelet[3176]: E0914 00:52:32.798954    3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-609507_kube-system(8d87185065e5c5b732f996180cc6b281)\"" pod="kube-system/kube-controller-manager-pause-609507" podUID="8d87185065e5c5b732f996180cc6b281"
	Sep 14 00:52:35 pause-609507 kubelet[3176]: E0914 00:52:35.176616    3176 kubelet_node_status.go:95] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="pause-609507"
	Sep 14 00:52:38 pause-609507 kubelet[3176]: I0914 00:52:38.379461    3176 kubelet_node_status.go:72] "Attempting to register node" node="pause-609507"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: E0914 00:52:42.028933    3176 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:52:42 pause-609507 kubelet[3176]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:52:42 pause-609507 kubelet[3176]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:52:42 pause-609507 kubelet[3176]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:52:42 pause-609507 kubelet[3176]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: E0914 00:52:42.101812    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275162101281245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: E0914 00:52:42.101930    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275162101281245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.442245    3176 kubelet_node_status.go:111] "Node was previously registered" node="pause-609507"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.442473    3176 kubelet_node_status.go:75] "Successfully registered node" node="pause-609507"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.442612    3176 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.443687    3176 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 00:52:46 pause-609507 kubelet[3176]: I0914 00:52:46.015737    3176 scope.go:117] "RemoveContainer" containerID="0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	Sep 14 00:52:46 pause-609507 kubelet[3176]: E0914 00:52:46.016345    3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-609507_kube-system(8d87185065e5c5b732f996180cc6b281)\"" pod="kube-system/kube-controller-manager-pause-609507" podUID="8d87185065e5c5b732f996180cc6b281"
	Sep 14 00:52:51 pause-609507 kubelet[3176]: I0914 00:52:51.616624    3176 scope.go:117] "RemoveContainer" containerID="6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	Sep 14 00:52:52 pause-609507 kubelet[3176]: E0914 00:52:52.110852    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275172107933132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:52 pause-609507 kubelet[3176]: E0914 00:52:52.111024    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275172107933132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:59 pause-609507 kubelet[3176]: I0914 00:52:59.015267    3176 scope.go:117] "RemoveContainer" containerID="0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	Sep 14 00:53:02 pause-609507 kubelet[3176]: E0914 00:53:02.114640    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275182114242364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:02 pause-609507 kubelet[3176]: E0914 00:53:02.114673    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275182114242364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:12 pause-609507 kubelet[3176]: E0914 00:53:12.117125    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275192116637626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:12 pause-609507 kubelet[3176]: E0914 00:53:12.117166    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275192116637626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-609507 -n pause-609507
helpers_test.go:261: (dbg) Run:  kubectl --context pause-609507 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-609507 -n pause-609507
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-609507 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-609507 logs -n 25: (2.219525466s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449 sudo cat                | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449 sudo cat                | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449 sudo cat                | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-670449                         | enable-default-cni-670449 | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC | 14 Sep 24 00:52 UTC |
	| start   | -p old-k8s-version-431084                            | old-k8s-version-431084    | jenkins | v1.34.0 | 14 Sep 24 00:52 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-670449 pgrep -a                           | flannel-670449            | jenkins | v1.34.0 | 14 Sep 24 00:53 UTC | 14 Sep 24 00:53 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:52:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:52:35.724587   66801 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:52:35.724870   66801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:35.724882   66801 out.go:358] Setting ErrFile to fd 2...
	I0914 00:52:35.724887   66801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:35.725072   66801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:52:35.725683   66801 out.go:352] Setting JSON to false
	I0914 00:52:35.726845   66801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5702,"bootTime":1726269454,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:52:35.726957   66801 start.go:139] virtualization: kvm guest
	I0914 00:52:35.729922   66801 out.go:177] * [old-k8s-version-431084] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:52:35.731826   66801 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:52:35.731857   66801 notify.go:220] Checking for updates...
	I0914 00:52:35.735497   66801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:52:35.737400   66801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:52:35.738944   66801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:35.740627   66801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:52:35.742223   66801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:52:35.744593   66801 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.744763   66801 config.go:182] Loaded profile config "flannel-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.744950   66801 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.745082   66801 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:52:35.792655   66801 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 00:52:35.794325   66801 start.go:297] selected driver: kvm2
	I0914 00:52:35.794345   66801 start.go:901] validating driver "kvm2" against <nil>
	I0914 00:52:35.794357   66801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:52:35.795353   66801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:52:35.795460   66801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:52:35.812779   66801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:52:35.812843   66801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:52:35.813119   66801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:52:35.813151   66801 cni.go:84] Creating CNI manager for ""
	I0914 00:52:35.813197   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:52:35.813206   66801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 00:52:35.813298   66801 start.go:340] cluster config:
	{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:52:35.813422   66801 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:52:35.815527   66801 out.go:177] * Starting "old-k8s-version-431084" primary control-plane node in "old-k8s-version-431084" cluster
	I0914 00:52:36.030521   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:36.031056   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:36.031083   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:36.031009   65551 retry.go:31] will retry after 2.413971511s: waiting for machine to come up
	I0914 00:52:38.446532   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:38.447295   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:38.447328   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:38.447229   65551 retry.go:31] will retry after 3.186000225s: waiting for machine to come up
	I0914 00:52:36.093385   63660 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002312016s
	I0914 00:52:36.093493   63660 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:52:35.816967   66801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:52:35.817022   66801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 00:52:35.817033   66801 cache.go:56] Caching tarball of preloaded images
	I0914 00:52:35.817165   66801 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:52:35.817181   66801 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 00:52:35.817348   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 00:52:35.817378   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json: {Name:mk66cd4353dae42258dd8e2fe6f383f65dc09589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:35.817576   66801 start.go:360] acquireMachinesLock for old-k8s-version-431084: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:52:41.093265   63660 kubeadm.go:310] [api-check] The API server is healthy after 5.002550747s
	I0914 00:52:41.106932   63660 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:52:41.134962   63660 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:52:41.175033   63660 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:52:41.175314   63660 kubeadm.go:310] [mark-control-plane] Marking the node flannel-670449 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:52:41.192400   63660 kubeadm.go:310] [bootstrap-token] Using token: m21b40.gixyoiwl4zzeo6il
	I0914 00:52:41.194333   63660 out.go:235]   - Configuring RBAC rules ...
	I0914 00:52:41.194533   63660 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:52:41.199774   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:52:41.215531   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:52:41.223597   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:52:41.228568   63660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:52:41.234464   63660 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:52:41.500134   63660 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:52:41.925932   63660 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:52:42.502870   63660 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:52:42.502897   63660 kubeadm.go:310] 
	I0914 00:52:42.502977   63660 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:52:42.502989   63660 kubeadm.go:310] 
	I0914 00:52:42.503083   63660 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:52:42.503091   63660 kubeadm.go:310] 
	I0914 00:52:42.503118   63660 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:52:42.503189   63660 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:52:42.503278   63660 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:52:42.503287   63660 kubeadm.go:310] 
	I0914 00:52:42.503366   63660 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:52:42.503376   63660 kubeadm.go:310] 
	I0914 00:52:42.503473   63660 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:52:42.503496   63660 kubeadm.go:310] 
	I0914 00:52:42.503573   63660 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:52:42.503674   63660 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:52:42.503770   63660 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:52:42.503779   63660 kubeadm.go:310] 
	I0914 00:52:42.503906   63660 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:52:42.504038   63660 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:52:42.504048   63660 kubeadm.go:310] 
	I0914 00:52:42.504169   63660 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m21b40.gixyoiwl4zzeo6il \
	I0914 00:52:42.504305   63660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 00:52:42.504336   63660 kubeadm.go:310] 	--control-plane 
	I0914 00:52:42.504351   63660 kubeadm.go:310] 
	I0914 00:52:42.504464   63660 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:52:42.504475   63660 kubeadm.go:310] 
	I0914 00:52:42.504582   63660 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m21b40.gixyoiwl4zzeo6il \
	I0914 00:52:42.504708   63660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 00:52:42.505158   63660 kubeadm.go:310] W0914 00:52:31.439490     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:42.505425   63660 kubeadm.go:310] W0914 00:52:31.440579     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:42.505558   63660 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:52:42.505583   63660 cni.go:84] Creating CNI manager for "flannel"
	I0914 00:52:42.507068   63660 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0914 00:52:42.581213   57689 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (52.945245829s)
	I0914 00:52:42.584252   57689 logs.go:123] Gathering logs for etcd [174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688] ...
	I0914 00:52:42.584277   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:52:42.628197   57689 logs.go:123] Gathering logs for kube-scheduler [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac] ...
	I0914 00:52:42.628232   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac"
	I0914 00:52:41.634470   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:41.634910   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:41.634933   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:41.634875   65551 retry.go:31] will retry after 4.116962653s: waiting for machine to come up
	I0914 00:52:42.508050   63660 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 00:52:42.513561   63660 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 00:52:42.513577   63660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0914 00:52:42.534460   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 00:52:42.951907   63660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:52:42.952030   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:42.952068   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-670449 minikube.k8s.io/updated_at=2024_09_14T00_52_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=flannel-670449 minikube.k8s.io/primary=true
	I0914 00:52:43.138454   63660 ops.go:34] apiserver oom_adj: -16
	I0914 00:52:43.138611   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:43.639447   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:44.138955   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:44.638731   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:45.138696   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:45.639231   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:46.138783   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:46.639375   63660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:52:46.800566   63660 kubeadm.go:1113] duration metric: took 3.848587036s to wait for elevateKubeSystemPrivileges
	I0914 00:52:46.800608   63660 kubeadm.go:394] duration metric: took 15.544937331s to StartCluster
	I0914 00:52:46.800632   63660 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:46.800721   63660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:52:46.802200   63660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:46.802504   63660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:52:46.802534   63660 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:52:46.802600   63660 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:52:46.802752   63660 addons.go:69] Setting default-storageclass=true in profile "flannel-670449"
	I0914 00:52:46.802780   63660 addons.go:69] Setting storage-provisioner=true in profile "flannel-670449"
	I0914 00:52:46.802790   63660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-670449"
	I0914 00:52:46.802836   63660 addons.go:234] Setting addon storage-provisioner=true in "flannel-670449"
	I0914 00:52:46.802871   63660 host.go:66] Checking if "flannel-670449" exists ...
	I0914 00:52:46.802784   63660 config.go:182] Loaded profile config "flannel-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:46.803394   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.803445   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.803451   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.803493   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.804222   63660 out.go:177] * Verifying Kubernetes components...
	I0914 00:52:46.806107   63660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:52:46.819765   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0914 00:52:46.819875   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0914 00:52:46.820333   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.820405   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.820896   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.820900   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.820914   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.820918   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.821281   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.821318   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.821502   63660 main.go:141] libmachine: (flannel-670449) Calling .GetState
	I0914 00:52:46.821839   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.821880   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.825594   63660 addons.go:234] Setting addon default-storageclass=true in "flannel-670449"
	I0914 00:52:46.825643   63660 host.go:66] Checking if "flannel-670449" exists ...
	I0914 00:52:46.826029   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.826082   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.837556   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0914 00:52:46.838070   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.838695   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.838726   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.839045   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.839228   63660 main.go:141] libmachine: (flannel-670449) Calling .GetState
	I0914 00:52:46.841167   63660 main.go:141] libmachine: (flannel-670449) Calling .DriverName
	I0914 00:52:46.841408   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33385
	I0914 00:52:46.841937   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.842477   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.842498   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.842897   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.843189   63660 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:52:46.843415   63660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:46.843451   63660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:46.844758   63660 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:52:46.844780   63660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:52:46.844809   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHHostname
	I0914 00:52:46.847890   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.848409   63660 main.go:141] libmachine: (flannel-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:66:54", ip: ""} in network mk-flannel-670449: {Iface:virbr4 ExpiryTime:2024-09-14 01:52:15 +0000 UTC Type:0 Mac:52:54:00:15:66:54 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:flannel-670449 Clientid:01:52:54:00:15:66:54}
	I0914 00:52:46.848436   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined IP address 192.168.72.151 and MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.848612   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHPort
	I0914 00:52:46.848796   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHKeyPath
	I0914 00:52:46.848955   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHUsername
	I0914 00:52:46.849128   63660 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/flannel-670449/id_rsa Username:docker}
	I0914 00:52:46.859731   63660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0914 00:52:46.860330   63660 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:46.860803   63660 main.go:141] libmachine: Using API Version  1
	I0914 00:52:46.860825   63660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:46.861151   63660 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:46.861346   63660 main.go:141] libmachine: (flannel-670449) Calling .GetState
	I0914 00:52:46.863059   63660 main.go:141] libmachine: (flannel-670449) Calling .DriverName
	I0914 00:52:46.863289   63660 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:52:46.863303   63660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:52:46.863317   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHHostname
	I0914 00:52:46.866859   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.867369   63660 main.go:141] libmachine: (flannel-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:66:54", ip: ""} in network mk-flannel-670449: {Iface:virbr4 ExpiryTime:2024-09-14 01:52:15 +0000 UTC Type:0 Mac:52:54:00:15:66:54 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:flannel-670449 Clientid:01:52:54:00:15:66:54}
	I0914 00:52:46.867390   63660 main.go:141] libmachine: (flannel-670449) DBG | domain flannel-670449 has defined IP address 192.168.72.151 and MAC address 52:54:00:15:66:54 in network mk-flannel-670449
	I0914 00:52:46.867551   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHPort
	I0914 00:52:46.867717   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHKeyPath
	I0914 00:52:46.867871   63660 main.go:141] libmachine: (flannel-670449) Calling .GetSSHUsername
	I0914 00:52:46.868000   63660 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/flannel-670449/id_rsa Username:docker}
	I0914 00:52:47.129773   63660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:52:47.129816   63660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:52:47.208206   63660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:52:47.314622   63660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:52:47.619910   63660 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 00:52:47.620848   63660 node_ready.go:35] waiting up to 15m0s for node "flannel-670449" to be "Ready" ...
	I0914 00:52:47.901083   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901110   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901150   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901171   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901548   63660 main.go:141] libmachine: (flannel-670449) DBG | Closing plugin on server side
	I0914 00:52:47.901575   63660 main.go:141] libmachine: (flannel-670449) DBG | Closing plugin on server side
	I0914 00:52:47.901598   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.901607   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.901615   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901621   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901785   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.901804   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.901815   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.901823   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.901980   63660 main.go:141] libmachine: (flannel-670449) DBG | Closing plugin on server side
	I0914 00:52:47.902028   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.902063   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.902192   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.902204   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.912084   63660 main.go:141] libmachine: Making call to close driver server
	I0914 00:52:47.912108   63660 main.go:141] libmachine: (flannel-670449) Calling .Close
	I0914 00:52:47.912390   63660 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:52:47.912409   63660 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:52:47.914010   63660 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
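For reference, the sed pipeline logged at 00:52:47 rewrites CoreDNS's Corefile so pods on flannel-670449 can resolve host.minikube.internal. Reconstructed from that command alone (not read back from the cluster), the injected stanza should look roughly like the block below, with a matching "log" directive added above "errors":

        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }

If needed, the rewrite can be confirmed with kubectl --context flannel-670449 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'.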
	I0914 00:52:45.168586   57689 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0914 00:52:47.174812   57689 api_server.go:279] https://192.168.39.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 00:52:47.174848   57689 api_server.go:103] status: https://192.168.39.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
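The [+]/[-] listing above is kube-apiserver's verbose healthz report: every post-start hook is healthy and only the etcd check is failing, which is why the endpoint keeps returning 500. The same probe can be reproduced by hand, assuming the default system:public-info-viewer binding still permits unauthenticated reads of /healthz:

	# -k skips TLS verification; ?verbose asks for the per-check breakdown
	curl -k "https://192.168.39.112:8443/healthz?verbose"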
	I0914 00:52:47.174881   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:52:47.174949   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:52:47.222404   57689 cri.go:89] found id: "ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9"
	I0914 00:52:47.222430   57689 cri.go:89] found id: "429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179"
	I0914 00:52:47.222436   57689 cri.go:89] found id: ""
	I0914 00:52:47.222445   57689 logs.go:276] 2 containers: [ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9 429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179]
	I0914 00:52:47.222512   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.228116   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.232504   57689 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:52:47.232564   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:52:47.281624   57689 cri.go:89] found id: "8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73"
	I0914 00:52:47.281651   57689 cri.go:89] found id: "174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:52:47.281658   57689 cri.go:89] found id: ""
	I0914 00:52:47.281668   57689 logs.go:276] 2 containers: [8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73 174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688]
	I0914 00:52:47.281727   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.285853   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.290892   57689 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:52:47.290970   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:52:47.331869   57689 cri.go:89] found id: "6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	I0914 00:52:47.331895   57689 cri.go:89] found id: ""
	I0914 00:52:47.331905   57689 logs.go:276] 1 containers: [6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7]
	I0914 00:52:47.331968   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.335953   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:52:47.336028   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:52:47.377231   57689 cri.go:89] found id: "9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac"
	I0914 00:52:47.377256   57689 cri.go:89] found id: "29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1"
	I0914 00:52:47.377263   57689 cri.go:89] found id: ""
	I0914 00:52:47.377272   57689 logs.go:276] 2 containers: [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac 29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1]
	I0914 00:52:47.377332   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.381349   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.385995   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:52:47.386065   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:52:47.428313   57689 cri.go:89] found id: "eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7"
	I0914 00:52:47.428341   57689 cri.go:89] found id: ""
	I0914 00:52:47.428350   57689 logs.go:276] 1 containers: [eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7]
	I0914 00:52:47.428410   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.432320   57689 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:52:47.432393   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:52:47.466850   57689 cri.go:89] found id: "0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	I0914 00:52:47.466876   57689 cri.go:89] found id: ""
	I0914 00:52:47.466886   57689 logs.go:276] 1 containers: [0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969]
	I0914 00:52:47.466948   57689 ssh_runner.go:195] Run: which crictl
	I0914 00:52:47.470993   57689 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:52:47.471075   57689 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:52:47.505826   57689 cri.go:89] found id: ""
	I0914 00:52:47.505858   57689 logs.go:276] 0 containers: []
	W0914 00:52:47.505869   57689 logs.go:278] No container was found matching "kindnet"
	I0914 00:52:47.505887   57689 logs.go:123] Gathering logs for etcd [174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688] ...
	I0914 00:52:47.505901   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688"
	I0914 00:52:47.561783   57689 logs.go:123] Gathering logs for coredns [6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7] ...
	I0914 00:52:47.561837   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	I0914 00:52:47.607123   57689 logs.go:123] Gathering logs for container status ...
	I0914 00:52:47.607162   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:52:47.654787   57689 logs.go:123] Gathering logs for kube-apiserver [ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9] ...
	I0914 00:52:47.654834   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9"
	I0914 00:52:47.758213   57689 logs.go:123] Gathering logs for kube-apiserver [429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179] ...
	I0914 00:52:47.758259   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179"
	I0914 00:52:47.809207   57689 logs.go:123] Gathering logs for kubelet ...
	I0914 00:52:47.809253   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 00:52:47.922544   57689 logs.go:123] Gathering logs for dmesg ...
	I0914 00:52:47.922580   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:52:47.938532   57689 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:52:47.938565   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:52:48.058868   57689 logs.go:123] Gathering logs for etcd [8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73] ...
	I0914 00:52:48.058911   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73"
	I0914 00:52:48.105331   57689 logs.go:123] Gathering logs for kube-proxy [eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7] ...
	I0914 00:52:48.105364   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7"
	I0914 00:52:48.142072   57689 logs.go:123] Gathering logs for kube-scheduler [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac] ...
	I0914 00:52:48.142098   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac"
	I0914 00:52:48.176207   57689 logs.go:123] Gathering logs for kube-scheduler [29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1] ...
	I0914 00:52:48.176235   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1"
	I0914 00:52:48.230561   57689 logs.go:123] Gathering logs for kube-controller-manager [0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969] ...
	I0914 00:52:48.230598   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	I0914 00:52:48.269943   57689 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:52:48.269982   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:52:45.753768   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:45.754305   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find current IP address of domain bridge-670449 in network mk-bridge-670449
	I0914 00:52:45.754334   65519 main.go:141] libmachine: (bridge-670449) DBG | I0914 00:52:45.754233   65551 retry.go:31] will retry after 3.696197004s: waiting for machine to come up
	I0914 00:52:49.453223   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.453699   65519 main.go:141] libmachine: (bridge-670449) Found IP for machine: 192.168.50.31
	I0914 00:52:49.453727   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has current primary IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.453733   65519 main.go:141] libmachine: (bridge-670449) Reserving static IP address...
	I0914 00:52:49.454144   65519 main.go:141] libmachine: (bridge-670449) DBG | unable to find host DHCP lease matching {name: "bridge-670449", mac: "52:54:00:f0:d3:6e", ip: "192.168.50.31"} in network mk-bridge-670449
	I0914 00:52:49.541442   65519 main.go:141] libmachine: (bridge-670449) Reserved static IP address: 192.168.50.31
	I0914 00:52:49.541471   65519 main.go:141] libmachine: (bridge-670449) Waiting for SSH to be available...
	I0914 00:52:49.541480   65519 main.go:141] libmachine: (bridge-670449) DBG | Getting to WaitForSSH function...
	I0914 00:52:49.544901   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.545335   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.545357   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.545532   65519 main.go:141] libmachine: (bridge-670449) DBG | Using SSH client type: external
	I0914 00:52:49.545564   65519 main.go:141] libmachine: (bridge-670449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa (-rw-------)
	I0914 00:52:49.545590   65519 main.go:141] libmachine: (bridge-670449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 00:52:49.545599   65519 main.go:141] libmachine: (bridge-670449) DBG | About to run SSH command:
	I0914 00:52:49.545609   65519 main.go:141] libmachine: (bridge-670449) DBG | exit 0
	I0914 00:52:49.671919   65519 main.go:141] libmachine: (bridge-670449) DBG | SSH cmd err, output: <nil>: 
	I0914 00:52:49.672185   65519 main.go:141] libmachine: (bridge-670449) KVM machine creation complete!
	I0914 00:52:49.672598   65519 main.go:141] libmachine: (bridge-670449) Calling .GetConfigRaw
	I0914 00:52:49.673192   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:49.673397   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:49.673561   65519 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 00:52:49.673578   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:52:49.675295   65519 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 00:52:49.675312   65519 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 00:52:49.675321   65519 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 00:52:49.675330   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:49.677975   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.678364   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.678407   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.678557   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:49.678721   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.678862   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.679017   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:49.679154   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:49.679404   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:49.679423   65519 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 00:52:49.783363   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:52:49.783388   65519 main.go:141] libmachine: Detecting the provisioner...
	I0914 00:52:49.783395   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:49.787254   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.787742   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.787803   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.787987   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:49.788200   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.788466   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.788669   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:49.788874   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:49.789051   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:49.789067   65519 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 00:52:49.892912   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 00:52:49.893014   65519 main.go:141] libmachine: found compatible host: buildroot
	I0914 00:52:49.893036   65519 main.go:141] libmachine: Provisioning with buildroot...
	I0914 00:52:49.893048   65519 main.go:141] libmachine: (bridge-670449) Calling .GetMachineName
	I0914 00:52:49.893313   65519 buildroot.go:166] provisioning hostname "bridge-670449"
	I0914 00:52:49.893337   65519 main.go:141] libmachine: (bridge-670449) Calling .GetMachineName
	I0914 00:52:49.893533   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:49.895960   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.896375   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:49.896404   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:49.896565   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:49.896732   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.896878   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:49.896979   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:49.897089   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:49.897284   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:49.897298   65519 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-670449 && echo "bridge-670449" | sudo tee /etc/hostname
	I0914 00:52:50.014221   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-670449
	
	I0914 00:52:50.014267   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.017328   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.017692   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.017742   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.017890   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:50.018078   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.018239   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.018361   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:50.018531   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:50.018776   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:50.018802   65519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-670449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-670449/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-670449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:52:50.128628   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:52:50.128659   65519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:52:50.128686   65519 buildroot.go:174] setting up certificates
	I0914 00:52:50.128701   65519 provision.go:84] configureAuth start
	I0914 00:52:50.128710   65519 main.go:141] libmachine: (bridge-670449) Calling .GetMachineName
	I0914 00:52:50.129001   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:50.132177   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.132627   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.132656   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.132773   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.134959   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.135238   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.135263   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.135429   65519 provision.go:143] copyHostCerts
	I0914 00:52:50.135486   65519 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:52:50.135498   65519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:52:50.135575   65519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:52:50.135697   65519 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:52:50.135709   65519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:52:50.135738   65519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:52:50.135822   65519 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:52:50.135833   65519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:52:50.135873   65519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:52:50.135959   65519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.bridge-670449 san=[127.0.0.1 192.168.50.31 bridge-670449 localhost minikube]
	I0914 00:52:47.915042   63660 addons.go:510] duration metric: took 1.112446056s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0914 00:52:48.123994   63660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-670449" context rescaled to 1 replicas
	I0914 00:52:49.626128   63660 node_ready.go:53] node "flannel-670449" has status "Ready":"False"
	I0914 00:52:51.147099   57689 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0914 00:52:51.153123   57689 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0914 00:52:51.159343   57689 api_server.go:141] control plane version: v1.31.1
	I0914 00:52:51.159368   57689 api_server.go:131] duration metric: took 3m9.119642245s to wait for apiserver health ...
	I0914 00:52:51.159376   57689 cni.go:84] Creating CNI manager for ""
	I0914 00:52:51.159382   57689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:52:51.161233   57689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 00:52:51.328508   66801 start.go:364] duration metric: took 15.510868976s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 00:52:51.328574   66801 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:52:51.328690   66801 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 00:52:50.703747   65519 provision.go:177] copyRemoteCerts
	I0914 00:52:50.703845   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:52:50.703886   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.706881   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.707256   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.707283   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.707519   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:50.707732   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.707909   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:50.708051   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:50.790117   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:52:50.813667   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:52:50.837862   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:52:50.863630   65519 provision.go:87] duration metric: took 734.9175ms to configureAuth
	I0914 00:52:50.863656   65519 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:52:50.863848   65519 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:50.863928   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:50.867432   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.867857   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:50.867878   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:50.868086   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:50.868289   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.868419   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:50.868590   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:50.868786   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:50.868943   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:50.868956   65519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:52:51.086308   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:52:51.086343   65519 main.go:141] libmachine: Checking connection to Docker...
	I0914 00:52:51.086352   65519 main.go:141] libmachine: (bridge-670449) Calling .GetURL
	I0914 00:52:51.087648   65519 main.go:141] libmachine: (bridge-670449) DBG | Using libvirt version 6000000
	I0914 00:52:51.089747   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.090114   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.090138   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.090316   65519 main.go:141] libmachine: Docker is up and running!
	I0914 00:52:51.090332   65519 main.go:141] libmachine: Reticulating splines...
	I0914 00:52:51.090339   65519 client.go:171] duration metric: took 25.636237149s to LocalClient.Create
	I0914 00:52:51.090360   65519 start.go:167] duration metric: took 25.636338798s to libmachine.API.Create "bridge-670449"
	I0914 00:52:51.090373   65519 start.go:293] postStartSetup for "bridge-670449" (driver="kvm2")
	I0914 00:52:51.090386   65519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:52:51.090403   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.090652   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:52:51.090679   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.092803   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.093149   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.093170   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.093254   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.093411   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.093553   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.093680   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:51.174415   65519 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:52:51.178609   65519 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:52:51.178631   65519 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:52:51.178691   65519 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:52:51.178774   65519 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:52:51.178886   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:52:51.188254   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:52:51.217990   65519 start.go:296] duration metric: took 127.600471ms for postStartSetup
	I0914 00:52:51.218068   65519 main.go:141] libmachine: (bridge-670449) Calling .GetConfigRaw
	I0914 00:52:51.218735   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:51.221529   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.221968   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.222028   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.222236   65519 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/config.json ...
	I0914 00:52:51.222424   65519 start.go:128] duration metric: took 25.791389492s to createHost
	I0914 00:52:51.222446   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.224953   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.225313   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.225342   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.225513   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.225684   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.225845   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.225960   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.226142   65519 main.go:141] libmachine: Using SSH client type: native
	I0914 00:52:51.226312   65519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0914 00:52:51.226322   65519 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:52:51.328261   65519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275171.280433236
	
	I0914 00:52:51.328293   65519 fix.go:216] guest clock: 1726275171.280433236
	I0914 00:52:51.328303   65519 fix.go:229] Guest: 2024-09-14 00:52:51.280433236 +0000 UTC Remote: 2024-09-14 00:52:51.222435144 +0000 UTC m=+25.918691991 (delta=57.998092ms)
	I0914 00:52:51.328371   65519 fix.go:200] guest clock delta is within tolerance: 57.998092ms
	I0914 00:52:51.328381   65519 start.go:83] releasing machines lock for "bridge-670449", held for 25.897467372s
	I0914 00:52:51.328433   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.328693   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:51.331858   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.332216   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.332245   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.332437   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.332988   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.333184   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:52:51.333272   65519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:52:51.333324   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.333425   65519 ssh_runner.go:195] Run: cat /version.json
	I0914 00:52:51.333450   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:52:51.336254   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.336423   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.336614   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.336639   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.336818   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:51.336841   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.336841   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:51.337064   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.337065   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:52:51.337310   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.337327   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:52:51.337450   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:52:51.337447   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:51.337621   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:52:51.425713   65519 ssh_runner.go:195] Run: systemctl --version
	I0914 00:52:51.459283   65519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:52:51.631590   65519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:52:51.637349   65519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:52:51.637437   65519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:52:51.654400   65519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 00:52:51.654427   65519 start.go:495] detecting cgroup driver to use...
	I0914 00:52:51.654497   65519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:52:51.675236   65519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:52:51.691902   65519 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:52:51.691977   65519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:52:51.708339   65519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:52:51.724145   65519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:52:51.880435   65519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:52:52.056731   65519 docker.go:233] disabling docker service ...
	I0914 00:52:52.056810   65519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:52:52.076254   65519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:52:52.094856   65519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:52:52.287880   65519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:52:52.455638   65519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:52:52.475364   65519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:52:52.496530   65519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:52:52.496600   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.507683   65519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:52:52.507763   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.520778   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.532702   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.544613   65519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:52:52.556678   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.566990   65519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.584466   65519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:52:52.594753   65519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:52:52.606063   65519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 00:52:52.606136   65519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 00:52:52.625334   65519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:52:52.638788   65519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:52:52.771946   65519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:52:52.889568   65519 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:52:52.889645   65519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:52:52.894380   65519 start.go:563] Will wait 60s for crictl version
	I0914 00:52:52.894450   65519 ssh_runner.go:195] Run: which crictl
	I0914 00:52:52.899436   65519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:52:52.953327   65519 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:52:52.953417   65519 ssh_runner.go:195] Run: crio --version
	I0914 00:52:52.991522   65519 ssh_runner.go:195] Run: crio --version
	I0914 00:52:53.025909   65519 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 00:52:51.162252   57689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 00:52:51.173126   57689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 00:52:51.193150   57689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:52:51.193239   57689 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 00:52:51.193257   57689 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 00:52:51.200614   57689 system_pods.go:59] 6 kube-system pods found
	I0914 00:52:51.200646   57689 system_pods.go:61] "coredns-7c65d6cfc9-jjdnr" [17391162-c95e-489d-825a-a869da462757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 00:52:51.200654   57689 system_pods.go:61] "etcd-pause-609507" [3a57c2e5-009f-4f67-a8a2-0eeaf0a939a8] Running
	I0914 00:52:51.200664   57689 system_pods.go:61] "kube-apiserver-pause-609507" [35a9e7ba-4d49-486b-b21c-587b2cc63010] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 00:52:51.200673   57689 system_pods.go:61] "kube-controller-manager-pause-609507" [200bcfc3-e090-4792-9c94-7f448edd86be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 00:52:51.200683   57689 system_pods.go:61] "kube-proxy-djqjf" [ca94aecb-0013-45fc-b541-7d11e5f7089e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 00:52:51.200692   57689 system_pods.go:61] "kube-scheduler-pause-609507" [64772355-1ba0-46f4-a07d-9db6aee07b73] Running
	I0914 00:52:51.200700   57689 system_pods.go:74] duration metric: took 7.525304ms to wait for pod list to return data ...
	I0914 00:52:51.200714   57689 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:52:51.204808   57689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:52:51.204849   57689 node_conditions.go:123] node cpu capacity is 2
	I0914 00:52:51.204864   57689 node_conditions.go:105] duration metric: took 4.145509ms to run NodePressure ...
	I0914 00:52:51.204885   57689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 00:52:53.027300   65519 main.go:141] libmachine: (bridge-670449) Calling .GetIP
	I0914 00:52:53.031622   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:53.033230   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:52:53.033271   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:52:53.033584   65519 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 00:52:53.038756   65519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:52:53.054324   65519 kubeadm.go:883] updating cluster {Name:bridge-670449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:52:53.054461   65519 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:52:53.054550   65519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:52:53.096127   65519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 00:52:53.096198   65519 ssh_runner.go:195] Run: which lz4
	I0914 00:52:53.101323   65519 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 00:52:53.106279   65519 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 00:52:53.106309   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 00:52:54.518632   65519 crio.go:462] duration metric: took 1.417350063s to copy over tarball
	I0914 00:52:54.518708   65519 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 00:52:52.132656   63660 node_ready.go:53] node "flannel-670449" has status "Ready":"False"
	I0914 00:52:54.625439   63660 node_ready.go:53] node "flannel-670449" has status "Ready":"False"
	I0914 00:52:51.330804   66801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 00:52:51.331044   66801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:51.331098   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:51.348179   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0914 00:52:51.348690   66801 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:51.349350   66801 main.go:141] libmachine: Using API Version  1
	I0914 00:52:51.349375   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:51.349795   66801 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:51.349981   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:52:51.350148   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:52:51.350313   66801 start.go:159] libmachine.API.Create for "old-k8s-version-431084" (driver="kvm2")
	I0914 00:52:51.350346   66801 client.go:168] LocalClient.Create starting
	I0914 00:52:51.350381   66801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0914 00:52:51.350426   66801 main.go:141] libmachine: Decoding PEM data...
	I0914 00:52:51.350450   66801 main.go:141] libmachine: Parsing certificate...
	I0914 00:52:51.350517   66801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0914 00:52:51.350545   66801 main.go:141] libmachine: Decoding PEM data...
	I0914 00:52:51.350565   66801 main.go:141] libmachine: Parsing certificate...
	I0914 00:52:51.350590   66801 main.go:141] libmachine: Running pre-create checks...
	I0914 00:52:51.350607   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .PreCreateCheck
	I0914 00:52:51.350931   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:52:51.351327   66801 main.go:141] libmachine: Creating machine...
	I0914 00:52:51.351341   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .Create
	I0914 00:52:51.351507   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating KVM machine...
	I0914 00:52:51.352625   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found existing default KVM network
	I0914 00:52:51.353662   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.353505   66991 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:44:cd:68} reservation:<nil>}
	I0914 00:52:51.354641   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.354562   66991 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:96:2b} reservation:<nil>}
	I0914 00:52:51.355823   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.355718   66991 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380930}
	I0914 00:52:51.355857   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | created network xml: 
	I0914 00:52:51.355869   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | <network>
	I0914 00:52:51.355875   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <name>mk-old-k8s-version-431084</name>
	I0914 00:52:51.355891   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <dns enable='no'/>
	I0914 00:52:51.355895   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   
	I0914 00:52:51.355902   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0914 00:52:51.355907   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |     <dhcp>
	I0914 00:52:51.355916   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0914 00:52:51.355920   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |     </dhcp>
	I0914 00:52:51.355925   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   </ip>
	I0914 00:52:51.355930   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   
	I0914 00:52:51.355937   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | </network>
	I0914 00:52:51.355944   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | 
	I0914 00:52:51.364017   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | trying to create private KVM network mk-old-k8s-version-431084 192.168.61.0/24...
	I0914 00:52:51.440773   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 ...
	I0914 00:52:51.440805   66801 main.go:141] libmachine: (old-k8s-version-431084) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0914 00:52:51.440818   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | private KVM network mk-old-k8s-version-431084 192.168.61.0/24 created
	I0914 00:52:51.440831   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.440744   66991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:51.440852   66801 main.go:141] libmachine: (old-k8s-version-431084) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0914 00:52:51.735078   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.734905   66991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa...
	I0914 00:52:51.899652   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.899507   66991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/old-k8s-version-431084.rawdisk...
	I0914 00:52:51.899696   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Writing magic tar header
	I0914 00:52:51.899714   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Writing SSH key tar header
	I0914 00:52:51.899726   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.899685   66991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 ...
	I0914 00:52:51.899876   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084
	I0914 00:52:51.899901   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0914 00:52:51.899915   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 (perms=drwx------)
	I0914 00:52:51.899925   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:51.899945   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0914 00:52:51.899957   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 00:52:51.899967   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins
	I0914 00:52:51.899975   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home
	I0914 00:52:51.899988   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0914 00:52:51.899998   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Skipping /home - not owner
	I0914 00:52:51.900017   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0914 00:52:51.900030   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0914 00:52:51.900057   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 00:52:51.900075   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 00:52:51.900089   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 00:52:51.901527   66801 main.go:141] libmachine: (old-k8s-version-431084) define libvirt domain using xml: 
	I0914 00:52:51.901552   66801 main.go:141] libmachine: (old-k8s-version-431084) <domain type='kvm'>
	I0914 00:52:51.901574   66801 main.go:141] libmachine: (old-k8s-version-431084)   <name>old-k8s-version-431084</name>
	I0914 00:52:51.901605   66801 main.go:141] libmachine: (old-k8s-version-431084)   <memory unit='MiB'>2200</memory>
	I0914 00:52:51.901613   66801 main.go:141] libmachine: (old-k8s-version-431084)   <vcpu>2</vcpu>
	I0914 00:52:51.901626   66801 main.go:141] libmachine: (old-k8s-version-431084)   <features>
	I0914 00:52:51.901633   66801 main.go:141] libmachine: (old-k8s-version-431084)     <acpi/>
	I0914 00:52:51.901644   66801 main.go:141] libmachine: (old-k8s-version-431084)     <apic/>
	I0914 00:52:51.901649   66801 main.go:141] libmachine: (old-k8s-version-431084)     <pae/>
	I0914 00:52:51.901656   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.901675   66801 main.go:141] libmachine: (old-k8s-version-431084)   </features>
	I0914 00:52:51.901691   66801 main.go:141] libmachine: (old-k8s-version-431084)   <cpu mode='host-passthrough'>
	I0914 00:52:51.901731   66801 main.go:141] libmachine: (old-k8s-version-431084)   
	I0914 00:52:51.901751   66801 main.go:141] libmachine: (old-k8s-version-431084)   </cpu>
	I0914 00:52:51.901761   66801 main.go:141] libmachine: (old-k8s-version-431084)   <os>
	I0914 00:52:51.901771   66801 main.go:141] libmachine: (old-k8s-version-431084)     <type>hvm</type>
	I0914 00:52:51.901780   66801 main.go:141] libmachine: (old-k8s-version-431084)     <boot dev='cdrom'/>
	I0914 00:52:51.901786   66801 main.go:141] libmachine: (old-k8s-version-431084)     <boot dev='hd'/>
	I0914 00:52:51.901795   66801 main.go:141] libmachine: (old-k8s-version-431084)     <bootmenu enable='no'/>
	I0914 00:52:51.901800   66801 main.go:141] libmachine: (old-k8s-version-431084)   </os>
	I0914 00:52:51.901809   66801 main.go:141] libmachine: (old-k8s-version-431084)   <devices>
	I0914 00:52:51.901816   66801 main.go:141] libmachine: (old-k8s-version-431084)     <disk type='file' device='cdrom'>
	I0914 00:52:51.901829   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/boot2docker.iso'/>
	I0914 00:52:51.901836   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target dev='hdc' bus='scsi'/>
	I0914 00:52:51.901857   66801 main.go:141] libmachine: (old-k8s-version-431084)       <readonly/>
	I0914 00:52:51.901867   66801 main.go:141] libmachine: (old-k8s-version-431084)     </disk>
	I0914 00:52:51.901878   66801 main.go:141] libmachine: (old-k8s-version-431084)     <disk type='file' device='disk'>
	I0914 00:52:51.901892   66801 main.go:141] libmachine: (old-k8s-version-431084)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 00:52:51.901909   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/old-k8s-version-431084.rawdisk'/>
	I0914 00:52:51.901920   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target dev='hda' bus='virtio'/>
	I0914 00:52:51.901931   66801 main.go:141] libmachine: (old-k8s-version-431084)     </disk>
	I0914 00:52:51.901943   66801 main.go:141] libmachine: (old-k8s-version-431084)     <interface type='network'>
	I0914 00:52:51.901957   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source network='mk-old-k8s-version-431084'/>
	I0914 00:52:51.901966   66801 main.go:141] libmachine: (old-k8s-version-431084)       <model type='virtio'/>
	I0914 00:52:51.901975   66801 main.go:141] libmachine: (old-k8s-version-431084)     </interface>
	I0914 00:52:51.901982   66801 main.go:141] libmachine: (old-k8s-version-431084)     <interface type='network'>
	I0914 00:52:51.901994   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source network='default'/>
	I0914 00:52:51.902000   66801 main.go:141] libmachine: (old-k8s-version-431084)       <model type='virtio'/>
	I0914 00:52:51.902010   66801 main.go:141] libmachine: (old-k8s-version-431084)     </interface>
	I0914 00:52:51.902021   66801 main.go:141] libmachine: (old-k8s-version-431084)     <serial type='pty'>
	I0914 00:52:51.902033   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target port='0'/>
	I0914 00:52:51.902040   66801 main.go:141] libmachine: (old-k8s-version-431084)     </serial>
	I0914 00:52:51.902052   66801 main.go:141] libmachine: (old-k8s-version-431084)     <console type='pty'>
	I0914 00:52:51.902062   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target type='serial' port='0'/>
	I0914 00:52:51.902072   66801 main.go:141] libmachine: (old-k8s-version-431084)     </console>
	I0914 00:52:51.902081   66801 main.go:141] libmachine: (old-k8s-version-431084)     <rng model='virtio'>
	I0914 00:52:51.902091   66801 main.go:141] libmachine: (old-k8s-version-431084)       <backend model='random'>/dev/random</backend>
	I0914 00:52:51.902100   66801 main.go:141] libmachine: (old-k8s-version-431084)     </rng>
	I0914 00:52:51.902107   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.902116   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.902133   66801 main.go:141] libmachine: (old-k8s-version-431084)   </devices>
	I0914 00:52:51.902144   66801 main.go:141] libmachine: (old-k8s-version-431084) </domain>
	I0914 00:52:51.902155   66801 main.go:141] libmachine: (old-k8s-version-431084) 
	I0914 00:52:51.906817   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:63:e1:fc in network default
	I0914 00:52:51.907735   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 00:52:51.907769   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:51.908690   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 00:52:51.909010   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 00:52:51.909570   66801 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 00:52:51.910517   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 00:52:53.472296   66801 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 00:52:53.473458   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:53.474119   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:53.474172   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:53.474109   66991 retry.go:31] will retry after 277.653713ms: waiting for machine to come up
	I0914 00:52:53.753876   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:53.755354   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:53.755382   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:53.755255   66991 retry.go:31] will retry after 372.557708ms: waiting for machine to come up
	I0914 00:52:54.129933   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.130551   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.130578   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.130504   66991 retry.go:31] will retry after 329.217104ms: waiting for machine to come up
	I0914 00:52:54.461115   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.461742   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.461767   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.461660   66991 retry.go:31] will retry after 534.468325ms: waiting for machine to come up
	I0914 00:52:54.998338   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.999189   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.999215   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.999096   66991 retry.go:31] will retry after 529.424126ms: waiting for machine to come up
	I0914 00:52:55.529670   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:55.530157   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:55.530193   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:55.530103   66991 retry.go:31] will retry after 701.848536ms: waiting for machine to come up
	I0914 00:52:56.925508   65519 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.406768902s)
	I0914 00:52:56.925543   65519 crio.go:469] duration metric: took 2.406883237s to extract the tarball
	I0914 00:52:56.925552   65519 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 00:52:56.975908   65519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:52:57.017587   65519 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:52:57.017610   65519 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:52:57.017620   65519 kubeadm.go:934] updating node { 192.168.50.31 8443 v1.31.1 crio true true} ...
	I0914 00:52:57.017729   65519 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-670449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:bridge-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0914 00:52:57.017808   65519 ssh_runner.go:195] Run: crio config
	I0914 00:52:57.064465   65519 cni.go:84] Creating CNI manager for "bridge"
	I0914 00:52:57.064490   65519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:52:57.064515   65519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-670449 NodeName:bridge-670449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:52:57.064701   65519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-670449"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:52:57.064773   65519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:52:57.075220   65519 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:52:57.075294   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:52:57.084561   65519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0914 00:52:57.101110   65519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:52:57.119668   65519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0914 00:52:57.136196   65519 ssh_runner.go:195] Run: grep 192.168.50.31	control-plane.minikube.internal$ /etc/hosts
	I0914 00:52:57.140005   65519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:52:57.152566   65519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:52:57.274730   65519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:52:57.291839   65519 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449 for IP: 192.168.50.31
	I0914 00:52:57.291859   65519 certs.go:194] generating shared ca certs ...
	I0914 00:52:57.291893   65519 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.292057   65519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:52:57.292117   65519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:52:57.292131   65519 certs.go:256] generating profile certs ...
	I0914 00:52:57.292214   65519 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.key
	I0914 00:52:57.292275   65519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt with IP's: []
	I0914 00:52:57.470771   65519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt ...
	I0914 00:52:57.470801   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: {Name:mkfae33963ef664b8dafda0c7b72fc834cfda5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.470997   65519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.key ...
	I0914 00:52:57.471012   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.key: {Name:mkd1d4d92b1a73a92a82f171f41ed38f2d046626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.471123   65519 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68
	I0914 00:52:57.471140   65519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.31]
	I0914 00:52:57.952463   65519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68 ...
	I0914 00:52:57.952498   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68: {Name:mk96d20ef2a9061df72d43920f79694c959175bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.952696   65519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68 ...
	I0914 00:52:57.952713   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68: {Name:mk5fbc43cd82e9fc09c39819ffeaa17abab4487f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:57.952813   65519 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt.49cf1e68 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt
	I0914 00:52:57.952905   65519 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key.49cf1e68 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key
	I0914 00:52:57.952964   65519 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key
	I0914 00:52:57.952979   65519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt with IP's: []
	I0914 00:52:58.138893   65519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt ...
	I0914 00:52:58.138923   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt: {Name:mk2d161bad34687b448a56b19baf23e332cfbddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:58.139112   65519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key ...
	I0914 00:52:58.139132   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key: {Name:mk816a55f2fe7fde072a4a7bded931e7c853cfdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:58.139349   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:52:58.139393   65519 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:52:58.139408   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:52:58.139439   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:52:58.139468   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:52:58.139499   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:52:58.139552   65519 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:52:58.140143   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:52:58.180444   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:52:58.210855   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:52:58.236251   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:52:58.259982   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:52:58.286127   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 00:52:58.310403   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:52:58.334106   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:52:58.364875   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:52:58.395178   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:52:58.421481   65519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:52:58.445049   65519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:52:58.461391   65519 ssh_runner.go:195] Run: openssl version
	I0914 00:52:58.467369   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:52:58.478084   65519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:52:58.484262   65519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:52:58.484330   65519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:52:58.492517   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:52:58.507832   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:52:58.518372   65519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:52:58.523964   65519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:52:58.524024   65519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:52:58.529585   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:52:58.539746   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:52:58.550165   65519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:52:58.554708   65519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:52:58.554777   65519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:52:58.560404   65519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:52:58.571620   65519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:52:58.575471   65519 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:52:58.575522   65519 kubeadm.go:392] StartCluster: {Name:bridge-670449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-670449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:52:58.575599   65519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:52:58.575654   65519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:52:58.619220   65519 cri.go:89] found id: ""
	I0914 00:52:58.619295   65519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:52:58.629608   65519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:52:58.639644   65519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:52:58.650965   65519 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:52:58.650989   65519 kubeadm.go:157] found existing configuration files:
	
	I0914 00:52:58.651037   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:52:58.660834   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:52:58.660896   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:52:58.670505   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:52:58.679253   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:52:58.679339   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:52:58.689420   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:52:58.698449   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:52:58.698523   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:52:58.707910   65519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:52:58.716706   65519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:52:58.716759   65519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:52:58.725842   65519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 00:52:58.780499   65519 kubeadm.go:310] W0914 00:52:58.731528     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:58.781527   65519 kubeadm.go:310] W0914 00:52:58.732757     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:52:58.906312   65519 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:52:55.630508   63660 node_ready.go:49] node "flannel-670449" has status "Ready":"True"
	I0914 00:52:55.630539   63660 node_ready.go:38] duration metric: took 8.009665155s for node "flannel-670449" to be "Ready" ...
	I0914 00:52:55.630551   63660 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:52:55.641111   63660 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace to be "Ready" ...
	I0914 00:52:57.647983   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:52:59.730897   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:52:56.234175   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:56.234644   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:56.234675   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:56.234584   66991 retry.go:31] will retry after 926.106578ms: waiting for machine to come up
	I0914 00:52:57.162172   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:57.162686   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:57.162715   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:57.162647   66991 retry.go:31] will retry after 1.270446243s: waiting for machine to come up
	I0914 00:52:58.435104   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:58.435636   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:58.435665   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:58.435587   66991 retry.go:31] will retry after 1.16744392s: waiting for machine to come up
	I0914 00:52:59.604970   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:59.605514   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:59.605541   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:59.605457   66991 retry.go:31] will retry after 1.768720127s: waiting for machine to come up
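	Editor's note: the "retry.go:31] will retry after ..." lines above show minikube's wait loop while the KVM guest acquires an IP address. Below is a minimal sketch of that retry-with-growing-backoff pattern; the function name, durations, and jitter are illustrative assumptions, not minikube's actual retry.go implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling fn until it succeeds or the deadline passes, sleeping a
    // little longer (with jitter) between attempts, similar to the
    // "will retry after Xs: waiting for machine to come up" log lines above.
    func retry(fn func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        backoff := 500 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            backoff *= 2 // grow the wait between attempts
        }
    }

    func main() {
        tries := 0
        err := retry(func() error {
            tries++
            if tries < 3 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("result:", err)
    }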
	I0914 00:53:01.300438   57689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (10.095520546s)
	I0914 00:53:01.300489   57689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 00:53:01.306293   57689 kubeadm.go:739] kubelet initialised
	I0914 00:53:01.306326   57689 kubeadm.go:740] duration metric: took 5.824594ms waiting for restarted kubelet to initialise ...
	I0914 00:53:01.306338   57689 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:01.313577   57689 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.323988   57689 pod_ready.go:93] pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:01.324021   57689 pod_ready.go:82] duration metric: took 10.407598ms for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.324038   57689 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.330569   57689 pod_ready.go:93] pod "etcd-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:01.330597   57689 pod_ready.go:82] duration metric: took 6.5482ms for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.330609   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.337384   57689 pod_ready.go:93] pod "kube-apiserver-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:01.337411   57689 pod_ready.go:82] duration metric: took 6.793361ms for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:01.337426   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:03.346882   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:02.147998   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:04.149019   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:01.375890   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:01.376460   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:01.376502   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:01.376418   66991 retry.go:31] will retry after 2.152913439s: waiting for machine to come up
	I0914 00:53:03.530645   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:03.531243   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:03.531267   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:03.531195   66991 retry.go:31] will retry after 2.194352636s: waiting for machine to come up
	I0914 00:53:08.387115   65519 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 00:53:08.387167   65519 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:53:08.387299   65519 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:53:08.387408   65519 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:53:08.387494   65519 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 00:53:08.387556   65519 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:53:08.388990   65519 out.go:235]   - Generating certificates and keys ...
	I0914 00:53:08.389061   65519 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:53:08.389122   65519 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:53:08.389212   65519 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:53:08.389275   65519 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:53:08.389364   65519 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:53:08.389435   65519 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:53:08.389502   65519 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:53:08.389660   65519 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-670449 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0914 00:53:08.389732   65519 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:53:08.389930   65519 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-670449 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0914 00:53:08.390001   65519 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:53:08.390069   65519 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:53:08.390130   65519 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:53:08.390218   65519 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:53:08.390273   65519 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:53:08.390326   65519 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 00:53:08.390373   65519 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:53:08.390446   65519 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:53:08.390512   65519 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:53:08.390602   65519 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:53:08.390692   65519 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:53:08.392172   65519 out.go:235]   - Booting up control plane ...
	I0914 00:53:08.392256   65519 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:53:08.392361   65519 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:53:08.392455   65519 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:53:08.392560   65519 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:53:08.392639   65519 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:53:08.392674   65519 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:53:08.392789   65519 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 00:53:08.392880   65519 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 00:53:08.392946   65519 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.829153ms
	I0914 00:53:08.393036   65519 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:53:08.393123   65519 kubeadm.go:310] [api-check] The API server is healthy after 5.00176527s
	I0914 00:53:08.393274   65519 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:53:08.393457   65519 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:53:08.393544   65519 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:53:08.393733   65519 kubeadm.go:310] [mark-control-plane] Marking the node bridge-670449 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:53:08.393785   65519 kubeadm.go:310] [bootstrap-token] Using token: s2sitp.rzxqwa1q7sidpzu1
	I0914 00:53:08.395050   65519 out.go:235]   - Configuring RBAC rules ...
	I0914 00:53:08.395193   65519 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:53:08.395324   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:53:08.395481   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:53:08.395659   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:53:08.395769   65519 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:53:08.395893   65519 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:53:08.396023   65519 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:53:08.396091   65519 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:53:08.396150   65519 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:53:08.396159   65519 kubeadm.go:310] 
	I0914 00:53:08.396233   65519 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:53:08.396245   65519 kubeadm.go:310] 
	I0914 00:53:08.396381   65519 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:53:08.396400   65519 kubeadm.go:310] 
	I0914 00:53:08.396442   65519 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:53:08.396521   65519 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:53:08.396590   65519 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:53:08.396605   65519 kubeadm.go:310] 
	I0914 00:53:08.396662   65519 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:53:08.396674   65519 kubeadm.go:310] 
	I0914 00:53:08.396743   65519 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:53:08.396760   65519 kubeadm.go:310] 
	I0914 00:53:08.396820   65519 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:53:08.396916   65519 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:53:08.396999   65519 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:53:08.397016   65519 kubeadm.go:310] 
	I0914 00:53:08.397100   65519 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:53:08.397193   65519 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:53:08.397205   65519 kubeadm.go:310] 
	I0914 00:53:08.397296   65519 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s2sitp.rzxqwa1q7sidpzu1 \
	I0914 00:53:08.397387   65519 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 00:53:08.397407   65519 kubeadm.go:310] 	--control-plane 
	I0914 00:53:08.397411   65519 kubeadm.go:310] 
	I0914 00:53:08.397480   65519 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:53:08.397486   65519 kubeadm.go:310] 
	I0914 00:53:08.397581   65519 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s2sitp.rzxqwa1q7sidpzu1 \
	I0914 00:53:08.397725   65519 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 00:53:08.397741   65519 cni.go:84] Creating CNI manager for "bridge"
	I0914 00:53:08.400000   65519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 00:53:05.844330   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:08.344248   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:06.649281   63660 pod_ready.go:103] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:08.146989   63660 pod_ready.go:93] pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.147012   63660 pod_ready.go:82] duration metric: took 12.505857681s for pod "coredns-7c65d6cfc9-tm2ff" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.147026   63660 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.150834   63660 pod_ready.go:93] pod "etcd-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.150856   63660 pod_ready.go:82] duration metric: took 3.822883ms for pod "etcd-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.150867   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.155749   63660 pod_ready.go:93] pod "kube-apiserver-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.155766   63660 pod_ready.go:82] duration metric: took 4.892502ms for pod "kube-apiserver-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.155778   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.159630   63660 pod_ready.go:93] pod "kube-controller-manager-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.159654   63660 pod_ready.go:82] duration metric: took 3.851407ms for pod "kube-controller-manager-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.159668   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-x74lz" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.163809   63660 pod_ready.go:93] pod "kube-proxy-x74lz" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.163830   63660 pod_ready.go:82] duration metric: took 4.154557ms for pod "kube-proxy-x74lz" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.163840   63660 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.545884   63660 pod_ready.go:93] pod "kube-scheduler-flannel-670449" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:08.545914   63660 pod_ready.go:82] duration metric: took 382.066072ms for pod "kube-scheduler-flannel-670449" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:08.545926   63660 pod_ready.go:39] duration metric: took 12.915347583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:08.545939   63660 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:53:08.545987   63660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:53:08.561058   63660 api_server.go:72] duration metric: took 21.758482762s to wait for apiserver process to appear ...
	I0914 00:53:08.561088   63660 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:53:08.561111   63660 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0914 00:53:08.566419   63660 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0914 00:53:08.567390   63660 api_server.go:141] control plane version: v1.31.1
	I0914 00:53:08.567415   63660 api_server.go:131] duration metric: took 6.320118ms to wait for apiserver health ...
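	Editor's note: the api_server.go lines above probe https://192.168.72.151:8443/healthz and treat a 200 response with body "ok" as a healthy control plane. A minimal sketch of such a probe follows; the insecure TLS setting exists only because this standalone sketch has no access to the cluster CA, and the URL is copied from the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz performs the same kind of check as the log above:
    // GET <apiserver>/healthz and treat a 200 "ok" body as healthy.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // A real client would trust the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
        return nil
    }

    func main() {
        if err := probeHealthz("https://192.168.72.151:8443/healthz"); err != nil {
            fmt.Println("apiserver not healthy:", err)
        }
    }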
	I0914 00:53:08.567424   63660 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:53:08.748094   63660 system_pods.go:59] 7 kube-system pods found
	I0914 00:53:08.748128   63660 system_pods.go:61] "coredns-7c65d6cfc9-tm2ff" [31003b21-1677-433c-a949-70b7f1890ac4] Running
	I0914 00:53:08.748137   63660 system_pods.go:61] "etcd-flannel-670449" [faa35475-fa7c-4330-b73a-8960699360aa] Running
	I0914 00:53:08.748142   63660 system_pods.go:61] "kube-apiserver-flannel-670449" [da3a74e7-8805-4ff0-b3ad-374c17a275d9] Running
	I0914 00:53:08.748147   63660 system_pods.go:61] "kube-controller-manager-flannel-670449" [7af6638a-2187-4ed1-ad59-f34fbdc221a6] Running
	I0914 00:53:08.748152   63660 system_pods.go:61] "kube-proxy-x74lz" [ae50b997-6893-4038-80e9-909762ffafdb] Running
	I0914 00:53:08.748156   63660 system_pods.go:61] "kube-scheduler-flannel-670449" [303fa421-2d64-4f1f-9ad7-73d9bf1d193e] Running
	I0914 00:53:08.748160   63660 system_pods.go:61] "storage-provisioner" [325fa443-8cd6-4168-8e04-4be556773543] Running
	I0914 00:53:08.748168   63660 system_pods.go:74] duration metric: took 180.737829ms to wait for pod list to return data ...
	I0914 00:53:08.748178   63660 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:53:08.945680   63660 default_sa.go:45] found service account: "default"
	I0914 00:53:08.945718   63660 default_sa.go:55] duration metric: took 197.531742ms for default service account to be created ...
	I0914 00:53:08.945730   63660 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:53:09.148839   63660 system_pods.go:86] 7 kube-system pods found
	I0914 00:53:09.148866   63660 system_pods.go:89] "coredns-7c65d6cfc9-tm2ff" [31003b21-1677-433c-a949-70b7f1890ac4] Running
	I0914 00:53:09.148901   63660 system_pods.go:89] "etcd-flannel-670449" [faa35475-fa7c-4330-b73a-8960699360aa] Running
	I0914 00:53:09.148907   63660 system_pods.go:89] "kube-apiserver-flannel-670449" [da3a74e7-8805-4ff0-b3ad-374c17a275d9] Running
	I0914 00:53:09.148916   63660 system_pods.go:89] "kube-controller-manager-flannel-670449" [7af6638a-2187-4ed1-ad59-f34fbdc221a6] Running
	I0914 00:53:09.148920   63660 system_pods.go:89] "kube-proxy-x74lz" [ae50b997-6893-4038-80e9-909762ffafdb] Running
	I0914 00:53:09.148924   63660 system_pods.go:89] "kube-scheduler-flannel-670449" [303fa421-2d64-4f1f-9ad7-73d9bf1d193e] Running
	I0914 00:53:09.148929   63660 system_pods.go:89] "storage-provisioner" [325fa443-8cd6-4168-8e04-4be556773543] Running
	I0914 00:53:09.148934   63660 system_pods.go:126] duration metric: took 203.199763ms to wait for k8s-apps to be running ...
	I0914 00:53:09.148943   63660 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:53:09.148988   63660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:53:09.168619   63660 system_svc.go:56] duration metric: took 19.667361ms WaitForService to wait for kubelet
	I0914 00:53:09.168644   63660 kubeadm.go:582] duration metric: took 22.366082733s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:53:09.168660   63660 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:53:09.346444   63660 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:53:09.346476   63660 node_conditions.go:123] node cpu capacity is 2
	I0914 00:53:09.346490   63660 node_conditions.go:105] duration metric: took 177.825586ms to run NodePressure ...
	I0914 00:53:09.346507   63660 start.go:241] waiting for startup goroutines ...
	I0914 00:53:09.346515   63660 start.go:246] waiting for cluster config update ...
	I0914 00:53:09.346527   63660 start.go:255] writing updated cluster config ...
	I0914 00:53:09.346769   63660 ssh_runner.go:195] Run: rm -f paused
	I0914 00:53:09.395441   63660 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:53:09.397402   63660 out.go:177] * Done! kubectl is now configured to use "flannel-670449" cluster and "default" namespace by default
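	Editor's note: much of the log above is pod_ready.go polling kube-system pods until their Ready condition reports "True". The sketch below shows only that condition check using the upstream Kubernetes API types; it is an illustration of what "Ready":"True" means, not minikube's pod_ready code.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether a pod's PodReady condition is True,
    // which is the state the pod_ready.go lines above are waiting for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{
            Status: corev1.PodStatus{
                Conditions: []corev1.PodCondition{
                    {Type: corev1.PodReady, Status: corev1.ConditionTrue},
                },
            },
        }
        fmt.Println("Ready:", isPodReady(pod)) // Ready: true
    }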
	I0914 00:53:08.401126   65519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 00:53:08.413582   65519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
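	Editor's note: the line above shows minikube writing a 496-byte bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist, but the log does not include the file's contents. The sketch below writes an illustrative conflist of roughly that shape; every field value is an assumption about a generic bridge + host-local IPAM setup, not the actual minikube file.

    package main

    import (
        "fmt"
        "os"
    )

    // exampleConflist is a guess at the general shape of a bridge CNI conflist;
    // the real 1-k8s.conflist written by minikube is not shown in the log.
    const exampleConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Written to /tmp rather than /etc/cni/net.d to keep the sketch harmless.
        if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(exampleConflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        fmt.Println("wrote example conflist")
    }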
	I0914 00:53:08.433052   65519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:53:08.433141   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:08.433154   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-670449 minikube.k8s.io/updated_at=2024_09_14T00_53_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=bridge-670449 minikube.k8s.io/primary=true
	I0914 00:53:08.578648   65519 ops.go:34] apiserver oom_adj: -16
	I0914 00:53:08.578774   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:09.079901   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:09.579694   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:10.079370   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:05.728371   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:05.728822   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:05.728843   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:05.728779   66991 retry.go:31] will retry after 3.501013157s: waiting for machine to come up
	I0914 00:53:09.231390   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:09.232039   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:09.232061   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:09.231992   66991 retry.go:31] will retry after 4.974590479s: waiting for machine to come up
	I0914 00:53:10.345071   57689 pod_ready.go:103] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:12.344418   57689 pod_ready.go:93] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.344442   57689 pod_ready.go:82] duration metric: took 11.007008667s for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.344451   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.350053   57689 pod_ready.go:93] pod "kube-proxy-djqjf" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.350076   57689 pod_ready.go:82] duration metric: took 5.618815ms for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.350085   57689 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.355857   57689 pod_ready.go:93] pod "kube-scheduler-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.355879   57689 pod_ready.go:82] duration metric: took 5.787493ms for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.355887   57689 pod_ready.go:39] duration metric: took 11.049537682s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:12.355907   57689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:53:12.368790   57689 ops.go:34] apiserver oom_adj: -16
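	Editor's note: the ops.go line above reports the kube-apiserver's oom_adj, obtained with `cat /proc/$(pgrep kube-apiserver)/oom_adj`; a strongly negative value such as -16 makes the kernel's OOM killer prefer other processes. A minimal sketch of that read follows; the pid in main is a placeholder, since minikube locates the real one with pgrep.

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // oomAdj reads /proc/<pid>/oom_adj, the value reported as
    // "apiserver oom_adj: -16" in the log above. Negative values make
    // the OOM killer less likely to select the process.
    func oomAdj(pid int) (int, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }

    func main() {
        // 1234 is a placeholder pid for illustration only.
        adj, err := oomAdj(1234)
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("oom_adj:", adj)
    }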
	I0914 00:53:12.368811   57689 kubeadm.go:597] duration metric: took 3m43.322570748s to restartPrimaryControlPlane
	I0914 00:53:12.368820   57689 kubeadm.go:394] duration metric: took 3m43.525907543s to StartCluster
	I0914 00:53:12.368836   57689 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.368936   57689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:53:12.369836   57689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.370082   57689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:53:12.370150   57689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:53:12.370331   57689 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:53:12.371618   57689 out.go:177] * Verifying Kubernetes components...
	I0914 00:53:12.372284   57689 out.go:177] * Enabled addons: 
	I0914 00:53:10.578807   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:11.079442   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:11.579475   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:12.079216   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:12.579698   65519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:53:12.701611   65519 kubeadm.go:1113] duration metric: took 4.268537321s to wait for elevateKubeSystemPrivileges
	I0914 00:53:12.701651   65519 kubeadm.go:394] duration metric: took 14.126132772s to StartCluster
	I0914 00:53:12.701671   65519 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.701756   65519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:53:12.702785   65519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:12.703000   65519 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:53:12.703033   65519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:53:12.703087   65519 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 00:53:12.703181   65519 addons.go:69] Setting storage-provisioner=true in profile "bridge-670449"
	I0914 00:53:12.703200   65519 addons.go:234] Setting addon storage-provisioner=true in "bridge-670449"
	I0914 00:53:12.703199   65519 addons.go:69] Setting default-storageclass=true in profile "bridge-670449"
	I0914 00:53:12.703222   65519 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:53:12.703229   65519 host.go:66] Checking if "bridge-670449" exists ...
	I0914 00:53:12.703232   65519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-670449"
	I0914 00:53:12.703671   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.703697   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.703722   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.703751   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.704767   65519 out.go:177] * Verifying Kubernetes components...
	I0914 00:53:12.706226   65519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:12.719766   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33419
	I0914 00:53:12.720024   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0914 00:53:12.720241   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.720604   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.720797   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.720817   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.721104   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.721131   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.721163   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.721469   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.721638   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:53:12.721702   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.721738   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.725490   65519 addons.go:234] Setting addon default-storageclass=true in "bridge-670449"
	I0914 00:53:12.725544   65519 host.go:66] Checking if "bridge-670449" exists ...
	I0914 00:53:12.725929   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.725963   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.739911   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0914 00:53:12.740329   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.741071   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.741100   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.741497   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0914 00:53:12.741650   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.741882   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.741961   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:53:12.742457   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.742476   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.743027   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.743703   65519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:53:12.743735   65519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:53:12.745022   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:53:12.746862   65519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:53:12.373191   57689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:12.373831   57689 addons.go:510] duration metric: took 3.682189ms for enable addons: enabled=[]
	I0914 00:53:12.543638   57689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:53:12.562862   57689 node_ready.go:35] waiting up to 6m0s for node "pause-609507" to be "Ready" ...
	I0914 00:53:12.566498   57689 node_ready.go:49] node "pause-609507" has status "Ready":"True"
	I0914 00:53:12.566522   57689 node_ready.go:38] duration metric: took 3.626071ms for node "pause-609507" to be "Ready" ...
	I0914 00:53:12.566531   57689 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:12.570847   57689 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.575554   57689 pod_ready.go:93] pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.575574   57689 pod_ready.go:82] duration metric: took 4.700003ms for pod "coredns-7c65d6cfc9-jjdnr" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.575583   57689 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.743080   57689 pod_ready.go:93] pod "etcd-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:12.743100   57689 pod_ready.go:82] duration metric: took 167.511528ms for pod "etcd-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.743118   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.142378   57689 pod_ready.go:93] pod "kube-apiserver-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:13.142411   57689 pod_ready.go:82] duration metric: took 399.284578ms for pod "kube-apiserver-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.142422   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.541914   57689 pod_ready.go:93] pod "kube-controller-manager-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:13.541943   57689 pod_ready.go:82] duration metric: took 399.514151ms for pod "kube-controller-manager-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.541956   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.943146   57689 pod_ready.go:93] pod "kube-proxy-djqjf" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:13.943169   57689 pod_ready.go:82] duration metric: took 401.20562ms for pod "kube-proxy-djqjf" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.943179   57689 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:12.747973   65519 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:53:12.747987   65519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:53:12.748001   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:53:12.751069   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.751448   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:53:12.751467   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.751714   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:53:12.751940   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:53:12.752153   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:53:12.752296   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:53:12.760286   65519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0914 00:53:12.760749   65519 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:53:12.761313   65519 main.go:141] libmachine: Using API Version  1
	I0914 00:53:12.761339   65519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:53:12.761645   65519 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:53:12.761824   65519 main.go:141] libmachine: (bridge-670449) Calling .GetState
	I0914 00:53:12.763530   65519 main.go:141] libmachine: (bridge-670449) Calling .DriverName
	I0914 00:53:12.763730   65519 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:53:12.763747   65519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:53:12.763763   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHHostname
	I0914 00:53:12.767041   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.767558   65519 main.go:141] libmachine: (bridge-670449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:d3:6e", ip: ""} in network mk-bridge-670449: {Iface:virbr1 ExpiryTime:2024-09-14 01:52:41 +0000 UTC Type:0 Mac:52:54:00:f0:d3:6e Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:bridge-670449 Clientid:01:52:54:00:f0:d3:6e}
	I0914 00:53:12.767585   65519 main.go:141] libmachine: (bridge-670449) DBG | domain bridge-670449 has defined IP address 192.168.50.31 and MAC address 52:54:00:f0:d3:6e in network mk-bridge-670449
	I0914 00:53:12.767841   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHPort
	I0914 00:53:12.767992   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHKeyPath
	I0914 00:53:12.768078   65519 main.go:141] libmachine: (bridge-670449) Calling .GetSSHUsername
	I0914 00:53:12.768176   65519 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/bridge-670449/id_rsa Username:docker}
	I0914 00:53:12.956433   65519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:53:12.956665   65519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:53:12.981089   65519 node_ready.go:35] waiting up to 15m0s for node "bridge-670449" to be "Ready" ...
	I0914 00:53:13.001461   65519 node_ready.go:49] node "bridge-670449" has status "Ready":"True"
	I0914 00:53:13.001489   65519 node_ready.go:38] duration metric: took 20.372776ms for node "bridge-670449" to be "Ready" ...
	I0914 00:53:13.001502   65519 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:13.025192   65519 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-cw297" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:13.187242   65519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:53:13.234703   65519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:53:13.453862   65519 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0914 00:53:13.659692   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:13.659719   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:13.660027   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:13.660045   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:13.660054   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:13.660061   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:13.660317   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:13.660325   65519 main.go:141] libmachine: (bridge-670449) DBG | Closing plugin on server side
	I0914 00:53:13.660332   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:13.665507   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:13.665531   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:13.665791   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:13.665810   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:13.962125   65519 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-670449" context rescaled to 1 replicas
	I0914 00:53:14.296626   65519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061879112s)
	I0914 00:53:14.296685   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:14.296700   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:14.297714   65519 main.go:141] libmachine: (bridge-670449) DBG | Closing plugin on server side
	I0914 00:53:14.297739   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:14.297754   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:14.297763   65519 main.go:141] libmachine: Making call to close driver server
	I0914 00:53:14.297770   65519 main.go:141] libmachine: (bridge-670449) Calling .Close
	I0914 00:53:14.298051   65519 main.go:141] libmachine: (bridge-670449) DBG | Closing plugin on server side
	I0914 00:53:14.298098   65519 main.go:141] libmachine: Successfully made call to close driver server
	I0914 00:53:14.298116   65519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 00:53:14.300137   65519 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 00:53:14.342880   57689 pod_ready.go:93] pod "kube-scheduler-pause-609507" in "kube-system" namespace has status "Ready":"True"
	I0914 00:53:14.342906   57689 pod_ready.go:82] duration metric: took 399.720014ms for pod "kube-scheduler-pause-609507" in "kube-system" namespace to be "Ready" ...
	I0914 00:53:14.342917   57689 pod_ready.go:39] duration metric: took 1.776376455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:53:14.342935   57689 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:53:14.342993   57689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:53:14.362069   57689 api_server.go:72] duration metric: took 1.991949805s to wait for apiserver process to appear ...
	I0914 00:53:14.362098   57689 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:53:14.362121   57689 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0914 00:53:14.366529   57689 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0914 00:53:14.367413   57689 api_server.go:141] control plane version: v1.31.1
	I0914 00:53:14.367438   57689 api_server.go:131] duration metric: took 5.332951ms to wait for apiserver health ...
	I0914 00:53:14.367449   57689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:53:14.544956   57689 system_pods.go:59] 6 kube-system pods found
	I0914 00:53:14.544989   57689 system_pods.go:61] "coredns-7c65d6cfc9-jjdnr" [17391162-c95e-489d-825a-a869da462757] Running
	I0914 00:53:14.544997   57689 system_pods.go:61] "etcd-pause-609507" [3a57c2e5-009f-4f67-a8a2-0eeaf0a939a8] Running
	I0914 00:53:14.545002   57689 system_pods.go:61] "kube-apiserver-pause-609507" [35a9e7ba-4d49-486b-b21c-587b2cc63010] Running
	I0914 00:53:14.545008   57689 system_pods.go:61] "kube-controller-manager-pause-609507" [200bcfc3-e090-4792-9c94-7f448edd86be] Running
	I0914 00:53:14.545014   57689 system_pods.go:61] "kube-proxy-djqjf" [ca94aecb-0013-45fc-b541-7d11e5f7089e] Running
	I0914 00:53:14.545019   57689 system_pods.go:61] "kube-scheduler-pause-609507" [64772355-1ba0-46f4-a07d-9db6aee07b73] Running
	I0914 00:53:14.545027   57689 system_pods.go:74] duration metric: took 177.570103ms to wait for pod list to return data ...
	I0914 00:53:14.545040   57689 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:53:14.743308   57689 default_sa.go:45] found service account: "default"
	I0914 00:53:14.743342   57689 default_sa.go:55] duration metric: took 198.291849ms for default service account to be created ...
	I0914 00:53:14.743355   57689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:53:14.944071   57689 system_pods.go:86] 6 kube-system pods found
	I0914 00:53:14.944097   57689 system_pods.go:89] "coredns-7c65d6cfc9-jjdnr" [17391162-c95e-489d-825a-a869da462757] Running
	I0914 00:53:14.944103   57689 system_pods.go:89] "etcd-pause-609507" [3a57c2e5-009f-4f67-a8a2-0eeaf0a939a8] Running
	I0914 00:53:14.944106   57689 system_pods.go:89] "kube-apiserver-pause-609507" [35a9e7ba-4d49-486b-b21c-587b2cc63010] Running
	I0914 00:53:14.944110   57689 system_pods.go:89] "kube-controller-manager-pause-609507" [200bcfc3-e090-4792-9c94-7f448edd86be] Running
	I0914 00:53:14.944113   57689 system_pods.go:89] "kube-proxy-djqjf" [ca94aecb-0013-45fc-b541-7d11e5f7089e] Running
	I0914 00:53:14.944116   57689 system_pods.go:89] "kube-scheduler-pause-609507" [64772355-1ba0-46f4-a07d-9db6aee07b73] Running
	I0914 00:53:14.944125   57689 system_pods.go:126] duration metric: took 200.763397ms to wait for k8s-apps to be running ...
	I0914 00:53:14.944134   57689 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:53:14.944183   57689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:53:14.959855   57689 system_svc.go:56] duration metric: took 15.71469ms WaitForService to wait for kubelet
	I0914 00:53:14.959881   57689 kubeadm.go:582] duration metric: took 2.58977181s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:53:14.959897   57689 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:53:15.142221   57689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 00:53:15.142244   57689 node_conditions.go:123] node cpu capacity is 2
	I0914 00:53:15.142253   57689 node_conditions.go:105] duration metric: took 182.351283ms to run NodePressure ...
	I0914 00:53:15.142265   57689 start.go:241] waiting for startup goroutines ...
	I0914 00:53:15.142274   57689 start.go:246] waiting for cluster config update ...
	I0914 00:53:15.142284   57689 start.go:255] writing updated cluster config ...
	I0914 00:53:15.142577   57689 ssh_runner.go:195] Run: rm -f paused
	I0914 00:53:15.191743   57689 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:53:15.193710   57689 out.go:177] * Done! kubectl is now configured to use "pause-609507" cluster and "default" namespace by default
	W0914 00:53:15.199342   57689 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 9d4b0fec-5fe5-4cd6-a080-7e3a4dd20052
	I0914 00:53:14.301687   65519 addons.go:510] duration metric: took 1.598584871s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0914 00:53:15.031097   65519 pod_ready.go:103] pod "coredns-7c65d6cfc9-cw297" in "kube-system" namespace has status "Ready":"False"
	I0914 00:53:14.209543   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.210118   66801 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 00:53:14.210146   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.210154   66801 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 00:53:14.210593   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084
	I0914 00:53:14.296971   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 00:53:14.296997   66801 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 00:53:14.297027   66801 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 00:53:14.302432   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.302836   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.302946   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.303093   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 00:53:14.303124   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 00:53:14.303154   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 00:53:14.303167   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 00:53:14.303184   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 00:53:14.431746   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 00:53:14.432030   66801 main.go:141] libmachine: (old-k8s-version-431084) KVM machine creation complete!
	I0914 00:53:14.432386   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:53:14.432983   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:14.433194   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:14.433352   66801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 00:53:14.433368   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 00:53:14.434702   66801 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 00:53:14.434718   66801 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 00:53:14.434725   66801 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 00:53:14.434733   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.436931   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.437354   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.437386   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.437454   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.437635   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.437770   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.437923   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.438141   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.438373   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.438387   66801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 00:53:14.543589   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:53:14.543616   66801 main.go:141] libmachine: Detecting the provisioner...
	I0914 00:53:14.543627   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.546738   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.547137   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.547170   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.547318   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.547498   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.547675   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.547822   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.547957   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.548166   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.548178   66801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 00:53:14.649639   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 00:53:14.649735   66801 main.go:141] libmachine: found compatible host: buildroot
	I0914 00:53:14.649745   66801 main.go:141] libmachine: Provisioning with buildroot...
	I0914 00:53:14.649755   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:53:14.650015   66801 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 00:53:14.650053   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:53:14.650225   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.653638   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.654117   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.654141   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.654349   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.654544   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.654724   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.654888   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.655069   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.655291   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.655303   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 00:53:14.775286   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 00:53:14.775327   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.778779   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.779250   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.779282   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.779593   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.779844   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.780078   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.780275   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.780542   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.780766   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.780793   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:53:14.901057   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:53:14.901087   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:53:14.901110   66801 buildroot.go:174] setting up certificates
	I0914 00:53:14.901122   66801 provision.go:84] configureAuth start
	I0914 00:53:14.901132   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:53:14.901438   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:14.904596   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.905161   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.905192   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.905360   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.907972   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.908410   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.908440   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.908581   66801 provision.go:143] copyHostCerts
	I0914 00:53:14.908639   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:53:14.908649   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:53:14.908717   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:53:14.908801   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:53:14.908811   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:53:14.908841   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:53:14.908916   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:53:14.908926   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:53:14.908958   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:53:14.909027   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 00:53:15.268390   66801 provision.go:177] copyRemoteCerts
	I0914 00:53:15.268453   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:53:15.268476   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.271577   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.272037   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.272068   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.272265   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.272502   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.272693   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.272826   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:15.353915   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:53:15.378403   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 00:53:15.404566   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:53:15.434293   66801 provision.go:87] duration metric: took 533.159761ms to configureAuth
	I0914 00:53:15.434323   66801 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:53:15.434495   66801 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 00:53:15.434609   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.437445   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.437919   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.437943   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.438119   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.438318   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.438489   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.438658   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.438839   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:15.439095   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:15.439112   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:53:15.679700   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:53:15.679730   66801 main.go:141] libmachine: Checking connection to Docker...
	I0914 00:53:15.679738   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetURL
	I0914 00:53:15.680936   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using libvirt version 6000000
	I0914 00:53:15.683775   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.684363   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.684396   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.684581   66801 main.go:141] libmachine: Docker is up and running!
	I0914 00:53:15.684597   66801 main.go:141] libmachine: Reticulating splines...
	I0914 00:53:15.684632   66801 client.go:171] duration metric: took 24.334249123s to LocalClient.Create
	I0914 00:53:15.684663   66801 start.go:167] duration metric: took 24.334349462s to libmachine.API.Create "old-k8s-version-431084"
	I0914 00:53:15.684675   66801 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 00:53:15.684691   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:53:15.684716   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.684975   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:53:15.684997   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.687573   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.688050   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.688079   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.688236   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.688404   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.688524   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.688623   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:15.772846   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:53:15.778340   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:53:15.778364   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:53:15.778428   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:53:15.778535   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:53:15.778664   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:53:15.791458   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:53:15.825391   66801 start.go:296] duration metric: took 140.700059ms for postStartSetup
	I0914 00:53:15.825448   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:53:15.826222   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:15.829799   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.830385   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.830407   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.830843   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 00:53:15.831111   66801 start.go:128] duration metric: took 24.502405524s to createHost
	I0914 00:53:15.831137   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.834298   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.834686   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.834720   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.834972   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.835229   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.835381   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.835509   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.835830   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:15.836040   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:15.836055   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:53:15.944516   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275195.900159706
	
	I0914 00:53:15.944538   66801 fix.go:216] guest clock: 1726275195.900159706
	I0914 00:53:15.944546   66801 fix.go:229] Guest: 2024-09-14 00:53:15.900159706 +0000 UTC Remote: 2024-09-14 00:53:15.831122568 +0000 UTC m=+40.152976203 (delta=69.037138ms)
	I0914 00:53:15.944569   66801 fix.go:200] guest clock delta is within tolerance: 69.037138ms
	I0914 00:53:15.944575   66801 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 24.616036705s
	I0914 00:53:15.944597   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.944866   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:15.947649   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.948084   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.948128   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.948304   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.948809   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.949030   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.949134   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:53:15.949196   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.949228   66801 ssh_runner.go:195] Run: cat /version.json
	I0914 00:53:15.949253   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.951945   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952016   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952322   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.952347   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952376   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.952394   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952556   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.952725   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.952736   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.952917   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.952922   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.953076   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.953078   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:15.953250   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:16.028572   66801 ssh_runner.go:195] Run: systemctl --version
	I0914 00:53:16.068487   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:53:16.244562   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:53:16.251813   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:53:16.251881   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:53:16.272029   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 00:53:16.272060   66801 start.go:495] detecting cgroup driver to use...
	I0914 00:53:16.272133   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:53:16.290364   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:53:16.306421   66801 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:53:16.306490   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:53:16.321586   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:53:16.340840   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:53:16.474617   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:53:16.627816   66801 docker.go:233] disabling docker service ...
	I0914 00:53:16.627890   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:53:16.645746   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:53:16.664121   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:53:16.821046   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:53:16.976125   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:53:16.994901   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:53:17.021769   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 00:53:17.021830   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.035707   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:53:17.035799   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.050119   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.079610   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.097149   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:53:17.114899   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:53:17.128149   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 00:53:17.128206   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 00:53:17.143819   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:53:17.155977   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:17.295241   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:53:17.407809   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:53:17.407879   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:53:17.413233   66801 start.go:563] Will wait 60s for crictl version
	I0914 00:53:17.413299   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:17.417011   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:53:17.458437   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:53:17.458536   66801 ssh_runner.go:195] Run: crio --version
	I0914 00:53:17.493914   66801 ssh_runner.go:195] Run: crio --version
	I0914 00:53:17.537043   66801 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
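	The provisioning log above ends with minikube configuring CRI-O over SSH before starting Kubernetes v1.20.0. A minimal sketch of the equivalent manual steps, assembled only from the commands logged for this run (the crictl endpoint, pause image tag "registry.k8s.io/pause:3.2", and "cgroupfs" cgroup driver are the values this run used and may differ elsewhere):

		# point crictl at the CRI-O socket, as written to /etc/crictl.yaml in the log
		printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
		# apply the pause image and cgroup driver minikube chose for this profile
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		# restart and verify the runtime, mirroring the "crictl version" / "crio --version" checks in the log
		sudo systemctl restart crio
		sudo crictl version && crio --version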
	
	
	==> CRI-O <==
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.376227395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275198376189106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d90d61fd-44fc-4d01-8502-0ce31882ec91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.385455746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b68f97c-2649-476e-9d12-77fd3500f5b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.385649139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b68f97c-2649-476e-9d12-77fd3500f5b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.386027056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b68f97c-2649-476e-9d12-77fd3500f5b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.462020884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1abb1221-8932-4426-adc5-69886958dbb7 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.462109753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1abb1221-8932-4426-adc5-69886958dbb7 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.464443329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a553979-4e2c-46d0-894a-999ff7adf8ab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.465264203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275198465225920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a553979-4e2c-46d0-894a-999ff7adf8ab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.467350270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4daeecfd-ac7f-4418-852f-dd56f9e9f9d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.467414638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4daeecfd-ac7f-4418-852f-dd56f9e9f9d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.468667263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4daeecfd-ac7f-4418-852f-dd56f9e9f9d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.532522025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08187f6e-ec78-485d-93c8-48380900a7d0 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.532700678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08187f6e-ec78-485d-93c8-48380900a7d0 name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.534225973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e4158bd-cfca-420e-b1a7-c291446413cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.535027203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275198534992392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e4158bd-cfca-420e-b1a7-c291446413cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.535912019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a9848ec-95f5-47b5-9db6-e6e0adaa00f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.536168118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a9848ec-95f5-47b5-9db6-e6e0adaa00f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.536506182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a9848ec-95f5-47b5-9db6-e6e0adaa00f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.599290459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2f15c1f-264c-449e-a0f7-5d42d413fcee name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.599394503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2f15c1f-264c-449e-a0f7-5d42d413fcee name=/runtime.v1.RuntimeService/Version
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.601500862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99acfbe2-5526-4c57-8b69-a3b44d55f0c3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.606621925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275198606536467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99acfbe2-5526-4c57-8b69-a3b44d55f0c3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.609396187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e71a471-2311-456b-8f50-7dc19fde5128 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.609636953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e71a471-2311-456b-8f50-7dc19fde5128 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 00:53:18 pause-609507 crio[2371]: time="2024-09-14 00:53:18.610022079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275179030386810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44,PodSandboxId:d8e5011ba0b253440bcbeea669d78686db8b5d428a70b9b322806bda0c90618f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275171797731086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275171653048515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969,PodSandboxId:7cfac8b33aa9b4d53d224440762a9fa7e42d870e7e29df5ad85af050ed2baa96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726275136028247003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-609507,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 8d87185065e5c5b732f996180cc6b281,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275102260295533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479
195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275102246864757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9,PodSandboxId:b710b132614c242dace01e0859ed5715565bfddf0c16f4d1b49874f0e2ecff3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726274972282939424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7,PodSandboxId:07b81ac60df451181a7b7223cd5dca30291bf142e117ee7b210331b209ebfcfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726274968650374427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jjdnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17391162-c95e-489d-825a-a869da462757,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688,PodSandboxId:bafa2d3bb69f24f06964acfb77f1ea2363176f1ad2e5cf5654fbd2cea9be4d89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726274968440823177,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9b11b4ec540257a59479195eaf4d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1,PodSandboxId:5c25f762714ec64d4afe55489a8e9cd3e171f851adf5ba2677d84ef9ee76f9b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726274968226374547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-6095
07,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2631685558a653ccf0023b0a3630f45,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7,PodSandboxId:e98833148ac1f60788b21a11812f4cc59425395349b446e0a014d658480eccf4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726274914958437949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djqjf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: ca94aecb-0013-45fc-b541-7d11e5f7089e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179,PodSandboxId:160185de1867f3ae551e07cfa6ff6c771a826977444c321eedd0955eb3f281f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726274904290355660,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-609507,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e799416ae93e2f6eb005dc1e61fbd714,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e71a471-2311-456b-8f50-7dc19fde5128 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dafed29445983       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   19 seconds ago       Running             kube-controller-manager   4                   7cfac8b33aa9b       kube-controller-manager-pause-609507
	9580d0349e197       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   26 seconds ago       Running             kube-proxy                1                   d8e5011ba0b25       kube-proxy-djqjf
	a743e8a4eff31       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   27 seconds ago       Running             coredns                   2                   07b81ac60df45       coredns-7c65d6cfc9-jjdnr
	0fb6adc7c77a7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   3                   7cfac8b33aa9b       kube-controller-manager-pause-609507
	8cffdea91bc4f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   bafa2d3bb69f2       etcd-pause-609507
	9b0bfff8e7f47       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Running             kube-scheduler            2                   5c25f762714ec       kube-scheduler-pause-609507
	ae425e0fa034b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   3 minutes ago        Running             kube-apiserver            1                   b710b132614c2       kube-apiserver-pause-609507
	6093ecd4b31b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 minutes ago        Exited              coredns                   1                   07b81ac60df45       coredns-7c65d6cfc9-jjdnr
	174bce9f1ea9d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   3 minutes ago        Exited              etcd                      1                   bafa2d3bb69f2       etcd-pause-609507
	29b9d75e659af       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   3 minutes ago        Exited              kube-scheduler            1                   5c25f762714ec       kube-scheduler-pause-609507
	eafac013bbe30       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   4 minutes ago        Exited              kube-proxy                0                   e98833148ac1f       kube-proxy-djqjf
	429b60dcec5b5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   4 minutes ago        Exited              kube-apiserver            0                   160185de1867f       kube-apiserver-pause-609507
	
	
	==> coredns [6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33108 - 35724 "HINFO IN 4099614065336019101.5833173758431508590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014879208s
	
	
	==> coredns [a743e8a4eff31d968cd522dd5d17c38a0e4cdb50d18e40c763c1f1b3ea0f1467] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36818 - 55484 "HINFO IN 4869981132163530117.679535294579518781. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009512125s
	
	
	==> describe nodes <==
	Name:               pause-609507
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-609507
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=pause-609507
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_48_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:48:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-609507
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:53:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:52:42 +0000   Sat, 14 Sep 2024 00:48:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    pause-609507
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 16805f4643924281a7d376302d49bd1e
	  System UUID:                16805f46-4392-4281-a7d3-76302d49bd1e
	  Boot ID:                    c3cec0dd-95a4-4e58-b1a0-71ec99d4e6ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jjdnr                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m44s
	  kube-system                 etcd-pause-609507                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-pause-609507             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-pause-609507    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-djqjf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-pause-609507             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26s                    kube-proxy       
	  Normal  Starting                 4m43s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node pause-609507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node pause-609507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node pause-609507 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s                  kubelet          Node pause-609507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s                  kubelet          Node pause-609507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s                  kubelet          Node pause-609507 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m48s                  kubelet          Node pause-609507 status is now: NodeReady
	  Normal  RegisteredNode           4m44s                  node-controller  Node pause-609507 event: Registered Node pause-609507 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node pause-609507 event: Registered Node pause-609507 in Controller
	  Normal  NodeHasSufficientMemory  40s (x6 over 3m36s)    kubelet          Node pause-609507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x6 over 3m36s)    kubelet          Node pause-609507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x6 over 3m36s)    kubelet          Node pause-609507 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17s                    node-controller  Node pause-609507 event: Registered Node pause-609507 in Controller
	
	
	==> dmesg <==
	[  +0.064277] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063977] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.245186] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143717] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.339936] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +4.042152] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +3.928882] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.064208] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990603] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.122671] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.496638] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.288036] systemd-fstab-generator[1400]: Ignoring "noauto" option for root device
	[ +11.387294] kauditd_printk_skb: 97 callbacks suppressed
	[Sep14 00:49] systemd-fstab-generator[2224]: Ignoring "noauto" option for root device
	[  +0.141829] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.165517] systemd-fstab-generator[2250]: Ignoring "noauto" option for root device
	[  +0.145436] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.272556] systemd-fstab-generator[2290]: Ignoring "noauto" option for root device
	[  +0.948078] systemd-fstab-generator[2505]: Ignoring "noauto" option for root device
	[  +4.480849] kauditd_printk_skb: 187 callbacks suppressed
	[  +9.496509] systemd-fstab-generator[3169]: Ignoring "noauto" option for root device
	[Sep14 00:51] kauditd_printk_skb: 20 callbacks suppressed
	[Sep14 00:52] kauditd_printk_skb: 5 callbacks suppressed
	[ +58.420360] kauditd_printk_skb: 7 callbacks suppressed
	[Sep14 00:53] systemd-fstab-generator[4080]: Ignoring "noauto" option for root device
	
	
	==> etcd [174bce9f1ea9d3e319f177ed6e4921d63effa30b1e0c794bb88ad09815238688] <==
	{"level":"info","ts":"2024-09-14T00:49:30.609266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:49:30.609325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgPreVoteResp from b2f9167931180af7 at term 2"}
	{"level":"info","ts":"2024-09-14T00:49:30.609353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.609359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgVoteResp from b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.609368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.609375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2f9167931180af7 elected leader b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-09-14T00:49:30.614726Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:49:30.614681Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2f9167931180af7","local-member-attributes":"{Name:pause-609507 ClientURLs:[https://192.168.39.112:2379]}","request-path":"/0/members/b2f9167931180af7/attributes","cluster-id":"694778b4375dcf94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:49:30.615822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:49:30.615943Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:49:30.616179Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:49:30.616202Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:49:30.616687Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:49:30.617525Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:49:30.617528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.112:2379"}
	{"level":"info","ts":"2024-09-14T00:49:39.075200Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T00:49:39.075285Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-609507","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.112:2380"],"advertise-client-urls":["https://192.168.39.112:2379"]}
	{"level":"warn","ts":"2024-09-14T00:49:39.075388Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:49:39.075476Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:49:39.094022Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.112:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T00:49:39.094149Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.112:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T00:49:39.094244Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2f9167931180af7","current-leader-member-id":"b2f9167931180af7"}
	{"level":"info","ts":"2024-09-14T00:49:39.101781Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.112:2380"}
	{"level":"info","ts":"2024-09-14T00:49:39.101895Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.112:2380"}
	{"level":"info","ts":"2024-09-14T00:49:39.101906Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-609507","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.112:2380"],"advertise-client-urls":["https://192.168.39.112:2379"]}
	
	
	==> etcd [8cffdea91bc4f9d66b4d46e28128818abbbc19f663f23ad0d633dd3ff9ab9a73] <==
	{"level":"info","ts":"2024-09-14T00:51:45.300477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-14T00:51:45.300664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-14T00:51:45.300734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgPreVoteResp from b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-09-14T00:51:45.300792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became candidate at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.300827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgVoteResp from b2f9167931180af7 at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.300855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became leader at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.300880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2f9167931180af7 elected leader b2f9167931180af7 at term 4"}
	{"level":"info","ts":"2024-09-14T00:51:45.306693Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2f9167931180af7","local-member-attributes":"{Name:pause-609507 ClientURLs:[https://192.168.39.112:2379]}","request-path":"/0/members/b2f9167931180af7/attributes","cluster-id":"694778b4375dcf94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:51:45.306783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:51:45.307141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:51:45.307177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:51:45.307199Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:51:45.308501Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:51:45.309798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:51:45.308499Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:51:45.311114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.112:2379"}
	{"level":"warn","ts":"2024-09-14T00:52:01.749074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.592178ms","expected-duration":"100ms","prefix":"","request":"header:<ID:790260711076673286 > lease_revoke:<id:0af791ee019db111>","response":"size:28"}
	{"level":"info","ts":"2024-09-14T00:52:59.299300Z","caller":"traceutil/trace.go:171","msg":"trace[1998006713] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"272.480183ms","start":"2024-09-14T00:52:59.026790Z","end":"2024-09-14T00:52:59.299271Z","steps":["trace[1998006713] 'process raft request'  (duration: 272.35199ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:52:59.850297Z","caller":"traceutil/trace.go:171","msg":"trace[1890527220] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"117.482678ms","start":"2024-09-14T00:52:59.732793Z","end":"2024-09-14T00:52:59.850276Z","steps":["trace[1890527220] 'process raft request'  (duration: 117.365011ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T00:53:00.173674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.93685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-controller-manager-pause-609507.17f4f60f921a2fb4\" ","response":"range_response_count:1 size:807"}
	{"level":"info","ts":"2024-09-14T00:53:00.173855Z","caller":"traceutil/trace.go:171","msg":"trace[810928997] range","detail":"{range_begin:/registry/events/kube-system/kube-controller-manager-pause-609507.17f4f60f921a2fb4; range_end:; response_count:1; response_revision:544; }","duration":"101.166015ms","start":"2024-09-14T00:53:00.072670Z","end":"2024-09-14T00:53:00.173836Z","steps":["trace[810928997] 'range keys from in-memory index tree'  (duration: 100.773989ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:53:00.363826Z","caller":"traceutil/trace.go:171","msg":"trace[1431055995] linearizableReadLoop","detail":"{readStateIndex:585; appliedIndex:584; }","duration":"142.851377ms","start":"2024-09-14T00:53:00.220958Z","end":"2024-09-14T00:53:00.363809Z","steps":["trace[1431055995] 'read index received'  (duration: 142.683698ms)","trace[1431055995] 'applied index is now lower than readState.Index'  (duration: 167.163µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T00:53:00.364074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.098425ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T00:53:00.364130Z","caller":"traceutil/trace.go:171","msg":"trace[62468103] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:545; }","duration":"143.182159ms","start":"2024-09-14T00:53:00.220937Z","end":"2024-09-14T00:53:00.364119Z","steps":["trace[62468103] 'agreement among raft nodes before linearized reading'  (duration: 143.056423ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:53:00.364713Z","caller":"traceutil/trace.go:171","msg":"trace[279047246] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"188.472695ms","start":"2024-09-14T00:53:00.176223Z","end":"2024-09-14T00:53:00.364696Z","steps":["trace[279047246] 'process raft request'  (duration: 187.47662ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:53:19 up 5 min,  0 users,  load average: 0.09, 0.25, 0.15
	Linux pause-609507 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [429b60dcec5b51cf2e5d63181ee5f8b6126a3a8d0f46413b458631fe36ac5179] <==
	I0914 00:48:28.779685       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 00:48:29.034820       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:48:29.622516       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:48:29.655684       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 00:48:29.681591       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:48:34.487095       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0914 00:48:34.587308       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0914 00:49:20.046711       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0914 00:49:20.062850       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0914 00:49:20.066210       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066489       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066627       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066692       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.066692       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.067873       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.068269       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.068359       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.068785       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.069506       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.069790       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071214       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071532       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071866       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.071954       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 00:49:20.072450       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ae425e0fa034b2f3c7d1f64ae7c8fd89ddb62bc94ca6bee659da793642e68bb9] <==
	E0914 00:52:12.611670       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.649806ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-609507" result=null
	E0914 00:52:16.199268       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.546µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:16.203301       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.38µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:29.607065       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0914 00:52:29.609051       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:29.610253       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:29.611619       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:29.613090       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.469222ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-609507" result=null
	E0914 00:52:31.985858       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.29µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:35.175779       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.3µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	E0914 00:52:42.201208       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0914 00:52:42.202523       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:42.203692       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:42.204847       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:42.206097       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.019959ms" method="GET" path="/api/v1/nodes/pause-609507" result=null
	E0914 00:52:44.058743       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0914 00:52:44.060466       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:44.061649       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:44.062809       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0914 00:52:44.064077       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.101763ms" method="GET" path="/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication" result=null
	I0914 00:52:54.014078       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 00:52:54.045030       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 00:52:54.124881       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 00:53:01.255412       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:53:01.265918       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969] <==
	I0914 00:52:16.621743       1 serving.go:386] Generated self-signed cert in-memory
	I0914 00:52:17.070221       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 00:52:17.070340       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:52:17.072218       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 00:52:17.072370       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 00:52:17.072870       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 00:52:17.073465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 00:52:31.089325       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [dafed29445983c5ed09dfd7361d8d3d2e0007dab6894fb03be2993ef654f1cc2] <==
	I0914 00:53:01.865172       1 shared_informer.go:320] Caches are synced for TTL
	I0914 00:53:01.867393       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0914 00:53:01.869758       1 shared_informer.go:320] Caches are synced for GC
	I0914 00:53:01.871024       1 shared_informer.go:320] Caches are synced for namespace
	I0914 00:53:01.872279       1 shared_informer.go:320] Caches are synced for daemon sets
	I0914 00:53:01.874748       1 shared_informer.go:320] Caches are synced for deployment
	I0914 00:53:01.904864       1 shared_informer.go:320] Caches are synced for service account
	I0914 00:53:01.978651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="159.228042ms"
	I0914 00:53:01.978736       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0914 00:53:01.979539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.696µs"
	I0914 00:53:01.981684       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 00:53:01.998850       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 00:53:02.013259       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0914 00:53:02.013363       1 shared_informer.go:320] Caches are synced for endpoint
	I0914 00:53:02.015823       1 shared_informer.go:320] Caches are synced for disruption
	I0914 00:53:02.023741       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0914 00:53:02.065190       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0914 00:53:02.115094       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 00:53:02.115210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 00:53:02.115860       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 00:53:02.116122       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 00:53:02.120091       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0914 00:53:02.522754       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 00:53:02.531326       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 00:53:02.531437       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [9580d0349e197b09b4a9a3eeacc6b88c72e9c94c4d0ae1e5749bc11c54b50b44] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:52:52.028356       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:52:52.038800       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.112"]
	E0914 00:52:52.038890       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:52:52.083031       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:52:52.083106       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:52:52.083153       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:52:52.086995       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:52:52.087593       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:52:52.087630       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:52:52.089634       1 config.go:199] "Starting service config controller"
	I0914 00:52:52.089705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:52:52.089766       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:52:52.089789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:52:52.092308       1 config.go:328] "Starting node config controller"
	I0914 00:52:52.098186       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:52:52.098198       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:52:52.190908       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:52:52.190962       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [eafac013bbe3081155ff2e72dceb0cfd18f9e273e06ea25481783163fec51cc7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 00:48:35.518640       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 00:48:35.568954       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.112"]
	E0914 00:48:35.569498       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:48:35.672805       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 00:48:35.672847       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 00:48:35.672882       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:48:35.752465       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:48:35.754635       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:48:35.754671       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:48:35.767833       1 config.go:199] "Starting service config controller"
	I0914 00:48:35.769335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:48:35.769649       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:48:35.769694       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:48:35.776907       1 config.go:328] "Starting node config controller"
	I0914 00:48:35.776935       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:48:35.970470       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:48:35.977071       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:48:35.976574       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29b9d75e659afae15c49842fa1f34e0985fa61b6e742b7acd97e475bcbd98dc1] <==
	W0914 00:49:34.113071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 00:49:34.113101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:49:34.113203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:49:34.113355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:49:34.113460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:49:34.113612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 00:49:34.113759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:49:34.113869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.113974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:49:34.114079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.114139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:49:34.114170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.115769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:49:34.115863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:49:34.116459       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:49:34.121645       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 00:49:38.384639       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 00:49:39.213637       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0914 00:49:39.213813       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9b0bfff8e7f47e5d8745b9e0744ddfd3b80d8d1751915b2cee0988c7ece867ac] <==
	I0914 00:51:43.369515       1 serving.go:386] Generated self-signed cert in-memory
	W0914 00:52:44.060163       1 authentication.go:370] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	W0914 00:52:44.060617       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 00:52:44.060682       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 00:52:44.077518       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 00:52:44.077673       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:52:44.080381       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 00:52:44.080464       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 00:52:44.080636       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 00:52:44.080761       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 00:52:44.180706       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:52:32 pause-609507 kubelet[3176]: I0914 00:52:32.798781    3176 scope.go:117] "RemoveContainer" containerID="0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	Sep 14 00:52:32 pause-609507 kubelet[3176]: E0914 00:52:32.798954    3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-609507_kube-system(8d87185065e5c5b732f996180cc6b281)\"" pod="kube-system/kube-controller-manager-pause-609507" podUID="8d87185065e5c5b732f996180cc6b281"
	Sep 14 00:52:35 pause-609507 kubelet[3176]: E0914 00:52:35.176616    3176 kubelet_node_status.go:95] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="pause-609507"
	Sep 14 00:52:38 pause-609507 kubelet[3176]: I0914 00:52:38.379461    3176 kubelet_node_status.go:72] "Attempting to register node" node="pause-609507"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: E0914 00:52:42.028933    3176 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 00:52:42 pause-609507 kubelet[3176]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 00:52:42 pause-609507 kubelet[3176]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 00:52:42 pause-609507 kubelet[3176]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 00:52:42 pause-609507 kubelet[3176]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: E0914 00:52:42.101812    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275162101281245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: E0914 00:52:42.101930    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275162101281245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.442245    3176 kubelet_node_status.go:111] "Node was previously registered" node="pause-609507"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.442473    3176 kubelet_node_status.go:75] "Successfully registered node" node="pause-609507"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.442612    3176 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 00:52:42 pause-609507 kubelet[3176]: I0914 00:52:42.443687    3176 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 00:52:46 pause-609507 kubelet[3176]: I0914 00:52:46.015737    3176 scope.go:117] "RemoveContainer" containerID="0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	Sep 14 00:52:46 pause-609507 kubelet[3176]: E0914 00:52:46.016345    3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-609507_kube-system(8d87185065e5c5b732f996180cc6b281)\"" pod="kube-system/kube-controller-manager-pause-609507" podUID="8d87185065e5c5b732f996180cc6b281"
	Sep 14 00:52:51 pause-609507 kubelet[3176]: I0914 00:52:51.616624    3176 scope.go:117] "RemoveContainer" containerID="6093ecd4b31b47b41c918e569e8b502176ac2fa763bb8a241c88d332d2d9f4e7"
	Sep 14 00:52:52 pause-609507 kubelet[3176]: E0914 00:52:52.110852    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275172107933132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:52 pause-609507 kubelet[3176]: E0914 00:52:52.111024    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275172107933132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:52:59 pause-609507 kubelet[3176]: I0914 00:52:59.015267    3176 scope.go:117] "RemoveContainer" containerID="0fb6adc7c77a7b1de83b78a8d6bad85c9baa6475c1c558e8caef835b2baf2969"
	Sep 14 00:53:02 pause-609507 kubelet[3176]: E0914 00:53:02.114640    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275182114242364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:02 pause-609507 kubelet[3176]: E0914 00:53:02.114673    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275182114242364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:12 pause-609507 kubelet[3176]: E0914 00:53:12.117125    3176 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275192116637626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:12 pause-609507 kubelet[3176]: E0914 00:53:12.117166    3176 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275192116637626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-609507 -n pause-609507
helpers_test.go:261: (dbg) Run:  kubectl --context pause-609507 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (241.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (285.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m45.163712235s)

                                                
                                                
-- stdout --
	* [old-k8s-version-431084] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-431084" primary control-plane node in "old-k8s-version-431084" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:52:35.724587   66801 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:52:35.724870   66801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:35.724882   66801 out.go:358] Setting ErrFile to fd 2...
	I0914 00:52:35.724887   66801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:35.725072   66801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:52:35.725683   66801 out.go:352] Setting JSON to false
	I0914 00:52:35.726845   66801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5702,"bootTime":1726269454,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:52:35.726957   66801 start.go:139] virtualization: kvm guest
	I0914 00:52:35.729922   66801 out.go:177] * [old-k8s-version-431084] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:52:35.731826   66801 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:52:35.731857   66801 notify.go:220] Checking for updates...
	I0914 00:52:35.735497   66801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:52:35.737400   66801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:52:35.738944   66801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:35.740627   66801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:52:35.742223   66801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:52:35.744593   66801 config.go:182] Loaded profile config "bridge-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.744763   66801 config.go:182] Loaded profile config "flannel-670449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.744950   66801 config.go:182] Loaded profile config "pause-609507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:52:35.745082   66801 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:52:35.792655   66801 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 00:52:35.794325   66801 start.go:297] selected driver: kvm2
	I0914 00:52:35.794345   66801 start.go:901] validating driver "kvm2" against <nil>
	I0914 00:52:35.794357   66801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:52:35.795353   66801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:52:35.795460   66801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:52:35.812779   66801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:52:35.812843   66801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:52:35.813119   66801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:52:35.813151   66801 cni.go:84] Creating CNI manager for ""
	I0914 00:52:35.813197   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:52:35.813206   66801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 00:52:35.813298   66801 start.go:340] cluster config:
	{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:52:35.813422   66801 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:52:35.815527   66801 out.go:177] * Starting "old-k8s-version-431084" primary control-plane node in "old-k8s-version-431084" cluster
	I0914 00:52:35.816967   66801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:52:35.817022   66801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 00:52:35.817033   66801 cache.go:56] Caching tarball of preloaded images
	I0914 00:52:35.817165   66801 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:52:35.817181   66801 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 00:52:35.817348   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 00:52:35.817378   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json: {Name:mk66cd4353dae42258dd8e2fe6f383f65dc09589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:52:35.817576   66801 start.go:360] acquireMachinesLock for old-k8s-version-431084: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:52:51.328508   66801 start.go:364] duration metric: took 15.510868976s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 00:52:51.328574   66801 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:52:51.328690   66801 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 00:52:51.330804   66801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 00:52:51.331044   66801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:52:51.331098   66801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:52:51.348179   66801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0914 00:52:51.348690   66801 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:52:51.349350   66801 main.go:141] libmachine: Using API Version  1
	I0914 00:52:51.349375   66801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:52:51.349795   66801 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:52:51.349981   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:52:51.350148   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:52:51.350313   66801 start.go:159] libmachine.API.Create for "old-k8s-version-431084" (driver="kvm2")
	I0914 00:52:51.350346   66801 client.go:168] LocalClient.Create starting
	I0914 00:52:51.350381   66801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem
	I0914 00:52:51.350426   66801 main.go:141] libmachine: Decoding PEM data...
	I0914 00:52:51.350450   66801 main.go:141] libmachine: Parsing certificate...
	I0914 00:52:51.350517   66801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem
	I0914 00:52:51.350545   66801 main.go:141] libmachine: Decoding PEM data...
	I0914 00:52:51.350565   66801 main.go:141] libmachine: Parsing certificate...
	I0914 00:52:51.350590   66801 main.go:141] libmachine: Running pre-create checks...
	I0914 00:52:51.350607   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .PreCreateCheck
	I0914 00:52:51.350931   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:52:51.351327   66801 main.go:141] libmachine: Creating machine...
	I0914 00:52:51.351341   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .Create
	I0914 00:52:51.351507   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating KVM machine...
	I0914 00:52:51.352625   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found existing default KVM network
	I0914 00:52:51.353662   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.353505   66991 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:44:cd:68} reservation:<nil>}
	I0914 00:52:51.354641   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.354562   66991 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:96:2b} reservation:<nil>}
	I0914 00:52:51.355823   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.355718   66991 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380930}
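[Annotation] The three network.go lines above scan candidate private /24 ranges, skip the ones already claimed by host interfaces (192.168.39.0/24, 192.168.50.0/24), and settle on 192.168.61.0/24. A minimal sketch of that idea, stdlib only; the candidate list and the helper name takenByHostInterface are illustrative assumptions, not the minikube implementation:

    package main

    import (
        "fmt"
        "net"
    )

    // takenByHostInterface reports whether any local interface address
    // falls inside the candidate subnet.
    func takenByHostInterface(subnet *net.IPNet) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative if we cannot inspect interfaces
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        // Hypothetical candidate ranges mirroring the progression in the log.
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                continue
            }
            if takenByHostInterface(subnet) {
                fmt.Printf("skipping subnet %s that is taken\n", c)
                continue
            }
            fmt.Printf("using free private subnet %s\n", c)
            return
        }
        fmt.Println("no free private subnet found")
    }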
	I0914 00:52:51.355857   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | created network xml: 
	I0914 00:52:51.355869   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | <network>
	I0914 00:52:51.355875   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <name>mk-old-k8s-version-431084</name>
	I0914 00:52:51.355891   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <dns enable='no'/>
	I0914 00:52:51.355895   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   
	I0914 00:52:51.355902   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0914 00:52:51.355907   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |     <dhcp>
	I0914 00:52:51.355916   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0914 00:52:51.355920   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |     </dhcp>
	I0914 00:52:51.355925   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   </ip>
	I0914 00:52:51.355930   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG |   
	I0914 00:52:51.355937   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | </network>
	I0914 00:52:51.355944   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | 
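[Annotation] The <network> definition just logged is ordinary libvirt network XML with a DHCP range carved out of the chosen subnet. A small sketch of generating an equivalent document with encoding/xml; the struct names are assumptions for illustration (the <dns enable='no'/> element is omitted for brevity), not the KVM driver's own types:

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    type dhcpRange struct {
        Start string `xml:"start,attr"`
        End   string `xml:"end,attr"`
    }

    type ipElem struct {
        Address string    `xml:"address,attr"`
        Netmask string    `xml:"netmask,attr"`
        Range   dhcpRange `xml:"dhcp>range"`
    }

    type network struct {
        XMLName xml.Name `xml:"network"`
        Name    string   `xml:"name"`
        IP      ipElem   `xml:"ip"`
    }

    func main() {
        n := network{
            Name: "mk-old-k8s-version-431084",
            IP: ipElem{
                Address: "192.168.61.1",
                Netmask: "255.255.255.0",
                Range:   dhcpRange{Start: "192.168.61.2", End: "192.168.61.253"},
            },
        }
        out, err := xml.MarshalIndent(n, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }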
	I0914 00:52:51.364017   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | trying to create private KVM network mk-old-k8s-version-431084 192.168.61.0/24...
	I0914 00:52:51.440773   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting up store path in /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 ...
	I0914 00:52:51.440805   66801 main.go:141] libmachine: (old-k8s-version-431084) Building disk image from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0914 00:52:51.440818   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | private KVM network mk-old-k8s-version-431084 192.168.61.0/24 created
	I0914 00:52:51.440831   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.440744   66991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:51.440852   66801 main.go:141] libmachine: (old-k8s-version-431084) Downloading /home/jenkins/minikube-integration/19640-5422/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso...
	I0914 00:52:51.735078   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.734905   66991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa...
	I0914 00:52:51.899652   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.899507   66991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/old-k8s-version-431084.rawdisk...
	I0914 00:52:51.899696   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Writing magic tar header
	I0914 00:52:51.899714   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Writing SSH key tar header
	I0914 00:52:51.899726   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:51.899685   66991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 ...
	I0914 00:52:51.899876   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084
	I0914 00:52:51.899901   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube/machines
	I0914 00:52:51.899915   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084 (perms=drwx------)
	I0914 00:52:51.899925   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:52:51.899945   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19640-5422
	I0914 00:52:51.899957   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 00:52:51.899967   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home/jenkins
	I0914 00:52:51.899975   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Checking permissions on dir: /home
	I0914 00:52:51.899988   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube/machines (perms=drwxr-xr-x)
	I0914 00:52:51.899998   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Skipping /home - not owner
	I0914 00:52:51.900017   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422/.minikube (perms=drwxr-xr-x)
	I0914 00:52:51.900030   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration/19640-5422 (perms=drwxrwxr-x)
	I0914 00:52:51.900057   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 00:52:51.900075   66801 main.go:141] libmachine: (old-k8s-version-431084) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 00:52:51.900089   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 00:52:51.901527   66801 main.go:141] libmachine: (old-k8s-version-431084) define libvirt domain using xml: 
	I0914 00:52:51.901552   66801 main.go:141] libmachine: (old-k8s-version-431084) <domain type='kvm'>
	I0914 00:52:51.901574   66801 main.go:141] libmachine: (old-k8s-version-431084)   <name>old-k8s-version-431084</name>
	I0914 00:52:51.901605   66801 main.go:141] libmachine: (old-k8s-version-431084)   <memory unit='MiB'>2200</memory>
	I0914 00:52:51.901613   66801 main.go:141] libmachine: (old-k8s-version-431084)   <vcpu>2</vcpu>
	I0914 00:52:51.901626   66801 main.go:141] libmachine: (old-k8s-version-431084)   <features>
	I0914 00:52:51.901633   66801 main.go:141] libmachine: (old-k8s-version-431084)     <acpi/>
	I0914 00:52:51.901644   66801 main.go:141] libmachine: (old-k8s-version-431084)     <apic/>
	I0914 00:52:51.901649   66801 main.go:141] libmachine: (old-k8s-version-431084)     <pae/>
	I0914 00:52:51.901656   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.901675   66801 main.go:141] libmachine: (old-k8s-version-431084)   </features>
	I0914 00:52:51.901691   66801 main.go:141] libmachine: (old-k8s-version-431084)   <cpu mode='host-passthrough'>
	I0914 00:52:51.901731   66801 main.go:141] libmachine: (old-k8s-version-431084)   
	I0914 00:52:51.901751   66801 main.go:141] libmachine: (old-k8s-version-431084)   </cpu>
	I0914 00:52:51.901761   66801 main.go:141] libmachine: (old-k8s-version-431084)   <os>
	I0914 00:52:51.901771   66801 main.go:141] libmachine: (old-k8s-version-431084)     <type>hvm</type>
	I0914 00:52:51.901780   66801 main.go:141] libmachine: (old-k8s-version-431084)     <boot dev='cdrom'/>
	I0914 00:52:51.901786   66801 main.go:141] libmachine: (old-k8s-version-431084)     <boot dev='hd'/>
	I0914 00:52:51.901795   66801 main.go:141] libmachine: (old-k8s-version-431084)     <bootmenu enable='no'/>
	I0914 00:52:51.901800   66801 main.go:141] libmachine: (old-k8s-version-431084)   </os>
	I0914 00:52:51.901809   66801 main.go:141] libmachine: (old-k8s-version-431084)   <devices>
	I0914 00:52:51.901816   66801 main.go:141] libmachine: (old-k8s-version-431084)     <disk type='file' device='cdrom'>
	I0914 00:52:51.901829   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/boot2docker.iso'/>
	I0914 00:52:51.901836   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target dev='hdc' bus='scsi'/>
	I0914 00:52:51.901857   66801 main.go:141] libmachine: (old-k8s-version-431084)       <readonly/>
	I0914 00:52:51.901867   66801 main.go:141] libmachine: (old-k8s-version-431084)     </disk>
	I0914 00:52:51.901878   66801 main.go:141] libmachine: (old-k8s-version-431084)     <disk type='file' device='disk'>
	I0914 00:52:51.901892   66801 main.go:141] libmachine: (old-k8s-version-431084)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 00:52:51.901909   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source file='/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/old-k8s-version-431084.rawdisk'/>
	I0914 00:52:51.901920   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target dev='hda' bus='virtio'/>
	I0914 00:52:51.901931   66801 main.go:141] libmachine: (old-k8s-version-431084)     </disk>
	I0914 00:52:51.901943   66801 main.go:141] libmachine: (old-k8s-version-431084)     <interface type='network'>
	I0914 00:52:51.901957   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source network='mk-old-k8s-version-431084'/>
	I0914 00:52:51.901966   66801 main.go:141] libmachine: (old-k8s-version-431084)       <model type='virtio'/>
	I0914 00:52:51.901975   66801 main.go:141] libmachine: (old-k8s-version-431084)     </interface>
	I0914 00:52:51.901982   66801 main.go:141] libmachine: (old-k8s-version-431084)     <interface type='network'>
	I0914 00:52:51.901994   66801 main.go:141] libmachine: (old-k8s-version-431084)       <source network='default'/>
	I0914 00:52:51.902000   66801 main.go:141] libmachine: (old-k8s-version-431084)       <model type='virtio'/>
	I0914 00:52:51.902010   66801 main.go:141] libmachine: (old-k8s-version-431084)     </interface>
	I0914 00:52:51.902021   66801 main.go:141] libmachine: (old-k8s-version-431084)     <serial type='pty'>
	I0914 00:52:51.902033   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target port='0'/>
	I0914 00:52:51.902040   66801 main.go:141] libmachine: (old-k8s-version-431084)     </serial>
	I0914 00:52:51.902052   66801 main.go:141] libmachine: (old-k8s-version-431084)     <console type='pty'>
	I0914 00:52:51.902062   66801 main.go:141] libmachine: (old-k8s-version-431084)       <target type='serial' port='0'/>
	I0914 00:52:51.902072   66801 main.go:141] libmachine: (old-k8s-version-431084)     </console>
	I0914 00:52:51.902081   66801 main.go:141] libmachine: (old-k8s-version-431084)     <rng model='virtio'>
	I0914 00:52:51.902091   66801 main.go:141] libmachine: (old-k8s-version-431084)       <backend model='random'>/dev/random</backend>
	I0914 00:52:51.902100   66801 main.go:141] libmachine: (old-k8s-version-431084)     </rng>
	I0914 00:52:51.902107   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.902116   66801 main.go:141] libmachine: (old-k8s-version-431084)     
	I0914 00:52:51.902133   66801 main.go:141] libmachine: (old-k8s-version-431084)   </devices>
	I0914 00:52:51.902144   66801 main.go:141] libmachine: (old-k8s-version-431084) </domain>
	I0914 00:52:51.902155   66801 main.go:141] libmachine: (old-k8s-version-431084) 
	I0914 00:52:51.906817   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:63:e1:fc in network default
	I0914 00:52:51.907735   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 00:52:51.907769   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:51.908690   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 00:52:51.909010   66801 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 00:52:51.909570   66801 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 00:52:51.910517   66801 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 00:52:53.472296   66801 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 00:52:53.473458   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:53.474119   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:53.474172   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:53.474109   66991 retry.go:31] will retry after 277.653713ms: waiting for machine to come up
	I0914 00:52:53.753876   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:53.755354   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:53.755382   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:53.755255   66991 retry.go:31] will retry after 372.557708ms: waiting for machine to come up
	I0914 00:52:54.129933   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.130551   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.130578   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.130504   66991 retry.go:31] will retry after 329.217104ms: waiting for machine to come up
	I0914 00:52:54.461115   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.461742   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.461767   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.461660   66991 retry.go:31] will retry after 534.468325ms: waiting for machine to come up
	I0914 00:52:54.998338   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:54.999189   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:54.999215   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:54.999096   66991 retry.go:31] will retry after 529.424126ms: waiting for machine to come up
	I0914 00:52:55.529670   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:55.530157   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:55.530193   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:55.530103   66991 retry.go:31] will retry after 701.848536ms: waiting for machine to come up
	I0914 00:52:56.234175   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:56.234644   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:56.234675   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:56.234584   66991 retry.go:31] will retry after 926.106578ms: waiting for machine to come up
	I0914 00:52:57.162172   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:57.162686   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:57.162715   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:57.162647   66991 retry.go:31] will retry after 1.270446243s: waiting for machine to come up
	I0914 00:52:58.435104   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:58.435636   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:58.435665   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:58.435587   66991 retry.go:31] will retry after 1.16744392s: waiting for machine to come up
	I0914 00:52:59.604970   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:52:59.605514   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:52:59.605541   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:52:59.605457   66991 retry.go:31] will retry after 1.768720127s: waiting for machine to come up
	I0914 00:53:01.375890   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:01.376460   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:01.376502   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:01.376418   66991 retry.go:31] will retry after 2.152913439s: waiting for machine to come up
	I0914 00:53:03.530645   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:03.531243   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:03.531267   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:03.531195   66991 retry.go:31] will retry after 2.194352636s: waiting for machine to come up
	I0914 00:53:05.728371   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:05.728822   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:05.728843   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:05.728779   66991 retry.go:31] will retry after 3.501013157s: waiting for machine to come up
	I0914 00:53:09.231390   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:09.232039   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 00:53:09.232061   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 00:53:09.231992   66991 retry.go:31] will retry after 4.974590479s: waiting for machine to come up
	I0914 00:53:14.209543   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.210118   66801 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 00:53:14.210146   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.210154   66801 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 00:53:14.210593   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084
	I0914 00:53:14.296971   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 00:53:14.296997   66801 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 00:53:14.297027   66801 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
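[Annotation] The "waiting for machine to come up" retries above poll for the domain's DHCP lease with delays that grow roughly toward a few seconds until the IP appears. A hedged sketch of that pattern; lookupIP is a hypothetical stand-in for querying the lease table, and the backoff schedule and timeout are assumptions, not the retry.go implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a placeholder for asking libvirt/DHCP for the lease of a MAC.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls until an address is found or the deadline passes,
    // roughly doubling the delay between attempts, as in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:d9:88:87", 3*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }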
	I0914 00:53:14.302432   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.302836   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.302946   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.303093   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 00:53:14.303124   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 00:53:14.303154   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 00:53:14.303167   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 00:53:14.303184   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 00:53:14.431746   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 00:53:14.432030   66801 main.go:141] libmachine: (old-k8s-version-431084) KVM machine creation complete!
	I0914 00:53:14.432386   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:53:14.432983   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:14.433194   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:14.433352   66801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 00:53:14.433368   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 00:53:14.434702   66801 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 00:53:14.434718   66801 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 00:53:14.434725   66801 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 00:53:14.434733   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.436931   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.437354   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.437386   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.437454   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.437635   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.437770   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.437923   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.438141   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.438373   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.438387   66801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 00:53:14.543589   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:53:14.543616   66801 main.go:141] libmachine: Detecting the provisioner...
	I0914 00:53:14.543627   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.546738   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.547137   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.547170   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.547318   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.547498   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.547675   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.547822   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.547957   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.548166   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.548178   66801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 00:53:14.649639   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 00:53:14.649735   66801 main.go:141] libmachine: found compatible host: buildroot
	I0914 00:53:14.649745   66801 main.go:141] libmachine: Provisioning with buildroot...
	I0914 00:53:14.649755   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:53:14.650015   66801 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 00:53:14.650053   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:53:14.650225   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.653638   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.654117   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.654141   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.654349   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.654544   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.654724   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.654888   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.655069   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.655291   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.655303   66801 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 00:53:14.775286   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 00:53:14.775327   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.778779   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.779250   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.779282   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.779593   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:14.779844   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.780078   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:14.780275   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:14.780542   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:14.780766   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:14.780793   66801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:53:14.901057   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:53:14.901087   66801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 00:53:14.901110   66801 buildroot.go:174] setting up certificates
	I0914 00:53:14.901122   66801 provision.go:84] configureAuth start
	I0914 00:53:14.901132   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 00:53:14.901438   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:14.904596   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.905161   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.905192   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.905360   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:14.907972   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.908410   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:14.908440   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:14.908581   66801 provision.go:143] copyHostCerts
	I0914 00:53:14.908639   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 00:53:14.908649   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 00:53:14.908717   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 00:53:14.908801   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 00:53:14.908811   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 00:53:14.908841   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 00:53:14.908916   66801 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 00:53:14.908926   66801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 00:53:14.908958   66801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 00:53:14.909027   66801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
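[Annotation] The provision.go line above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, and the hostnames. A compact sketch of issuing such a certificate with crypto/x509; it is self-signed here for brevity (the real flow signs with the CA key pair shown in the log), and the validity period is an assumption:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SAN list mirrors the san=[...] values logged above.
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-431084"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-431084"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }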
	I0914 00:53:15.268390   66801 provision.go:177] copyRemoteCerts
	I0914 00:53:15.268453   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:53:15.268476   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.271577   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.272037   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.272068   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.272265   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.272502   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.272693   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.272826   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:15.353915   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:53:15.378403   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 00:53:15.404566   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:53:15.434293   66801 provision.go:87] duration metric: took 533.159761ms to configureAuth
	I0914 00:53:15.434323   66801 buildroot.go:189] setting minikube options for container-runtime
	I0914 00:53:15.434495   66801 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 00:53:15.434609   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.437445   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.437919   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.437943   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.438119   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.438318   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.438489   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.438658   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.438839   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:15.439095   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:15.439112   66801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:53:15.679700   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:53:15.679730   66801 main.go:141] libmachine: Checking connection to Docker...
	I0914 00:53:15.679738   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetURL
	I0914 00:53:15.680936   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using libvirt version 6000000
	I0914 00:53:15.683775   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.684363   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.684396   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.684581   66801 main.go:141] libmachine: Docker is up and running!
	I0914 00:53:15.684597   66801 main.go:141] libmachine: Reticulating splines...
	I0914 00:53:15.684632   66801 client.go:171] duration metric: took 24.334249123s to LocalClient.Create
	I0914 00:53:15.684663   66801 start.go:167] duration metric: took 24.334349462s to libmachine.API.Create "old-k8s-version-431084"
	I0914 00:53:15.684675   66801 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 00:53:15.684691   66801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:53:15.684716   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.684975   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:53:15.684997   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.687573   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.688050   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.688079   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.688236   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.688404   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.688524   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.688623   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:15.772846   66801 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:53:15.778340   66801 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 00:53:15.778364   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 00:53:15.778428   66801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 00:53:15.778535   66801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 00:53:15.778664   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 00:53:15.791458   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:53:15.825391   66801 start.go:296] duration metric: took 140.700059ms for postStartSetup
	I0914 00:53:15.825448   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 00:53:15.826222   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:15.829799   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.830385   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.830407   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.830843   66801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 00:53:15.831111   66801 start.go:128] duration metric: took 24.502405524s to createHost
	I0914 00:53:15.831137   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.834298   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.834686   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.834720   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.834972   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.835229   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.835381   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.835509   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.835830   66801 main.go:141] libmachine: Using SSH client type: native
	I0914 00:53:15.836040   66801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 00:53:15.836055   66801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 00:53:15.944516   66801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275195.900159706
	
	I0914 00:53:15.944538   66801 fix.go:216] guest clock: 1726275195.900159706
	I0914 00:53:15.944546   66801 fix.go:229] Guest: 2024-09-14 00:53:15.900159706 +0000 UTC Remote: 2024-09-14 00:53:15.831122568 +0000 UTC m=+40.152976203 (delta=69.037138ms)
	I0914 00:53:15.944569   66801 fix.go:200] guest clock delta is within tolerance: 69.037138ms
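[Annotation] The fix.go lines above parse the guest's `date +%s.%N` output and accept the small skew against the host clock. A small sketch of that comparison; the 2 second tolerance is an assumption, not minikube's exact threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the "seconds.nanoseconds" string printed by
    // `date +%s.%N` on the guest into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1726275195.900159706")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance value
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }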
	I0914 00:53:15.944575   66801 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 24.616036705s
	I0914 00:53:15.944597   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.944866   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:15.947649   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.948084   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.948128   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.948304   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.948809   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.949030   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:53:15.949134   66801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:53:15.949196   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.949228   66801 ssh_runner.go:195] Run: cat /version.json
	I0914 00:53:15.949253   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 00:53:15.951945   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952016   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952322   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.952347   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952376   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:15.952394   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:15.952556   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.952725   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 00:53:15.952736   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.952917   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.952922   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 00:53:15.953076   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 00:53:15.953078   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:15.953250   66801 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 00:53:16.028572   66801 ssh_runner.go:195] Run: systemctl --version
	I0914 00:53:16.068487   66801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:53:16.244562   66801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 00:53:16.251813   66801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 00:53:16.251881   66801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:53:16.272029   66801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 00:53:16.272060   66801 start.go:495] detecting cgroup driver to use...
	I0914 00:53:16.272133   66801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:53:16.290364   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:53:16.306421   66801 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:53:16.306490   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:53:16.321586   66801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:53:16.340840   66801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:53:16.474617   66801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:53:16.627816   66801 docker.go:233] disabling docker service ...
	I0914 00:53:16.627890   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:53:16.645746   66801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:53:16.664121   66801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:53:16.821046   66801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:53:16.976125   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:53:16.994901   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:53:17.021769   66801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 00:53:17.021830   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.035707   66801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:53:17.035799   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.050119   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:53:17.079610   66801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
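	(For reference: the tee and sed steps above should leave /etc/crictl.yaml with the single line "runtime-endpoint: unix:///var/run/crio/crio.sock", and /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings, assuming the drop-in already carried pause_image and cgroup_manager keys for sed to rewrite:
	  pause_image = "registry.k8s.io/pause:3.2"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod")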
	I0914 00:53:17.097149   66801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:53:17.114899   66801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:53:17.128149   66801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 00:53:17.128206   66801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 00:53:17.143819   66801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:53:17.155977   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:17.295241   66801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:53:17.407809   66801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:53:17.407879   66801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:53:17.413233   66801 start.go:563] Will wait 60s for crictl version
	I0914 00:53:17.413299   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:17.417011   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:53:17.458437   66801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 00:53:17.458536   66801 ssh_runner.go:195] Run: crio --version
	I0914 00:53:17.493914   66801 ssh_runner.go:195] Run: crio --version
	I0914 00:53:17.537043   66801 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 00:53:17.538264   66801 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 00:53:17.541556   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:17.542107   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 01:53:07 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 00:53:17.542126   66801 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 00:53:17.542374   66801 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 00:53:17.547035   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
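	(For reference: the guarded hosts edit above amounts to appending one entry to the guest's /etc/hosts, roughly:
	  192.168.61.1	host.minikube.internal)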
	I0914 00:53:17.569831   66801 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:53:17.569959   66801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:53:17.570030   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:53:17.610786   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 00:53:17.610856   66801 ssh_runner.go:195] Run: which lz4
	I0914 00:53:17.614959   66801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 00:53:17.619319   66801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 00:53:17.619345   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 00:53:19.266781   66801 crio.go:462] duration metric: took 1.651866037s to copy over tarball
	I0914 00:53:19.266853   66801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 00:53:22.113624   66801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.846743817s)
	I0914 00:53:22.113654   66801 crio.go:469] duration metric: took 2.84684398s to extract the tarball
	I0914 00:53:22.113664   66801 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 00:53:22.169862   66801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:53:22.216996   66801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 00:53:22.217019   66801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 00:53:22.217087   66801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.217108   66801 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:22.217123   66801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.217135   66801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.217114   66801 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.217154   66801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:22.217138   66801 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 00:53:22.217090   66801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:53:22.218335   66801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:22.218346   66801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.218362   66801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.218418   66801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:22.218436   66801 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 00:53:22.218503   66801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:53:22.218666   66801 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.219092   66801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.409473   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.444983   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 00:53:22.449915   66801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 00:53:22.449970   66801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.450015   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.451772   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.463997   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.465074   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.479336   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:22.479520   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:22.536368   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.536446   66801 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 00:53:22.536483   66801 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 00:53:22.536512   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.576727   66801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 00:53:22.576772   66801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.576820   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.606260   66801 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 00:53:22.606311   66801 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.606345   66801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 00:53:22.606360   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.606382   66801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.606436   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.628611   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 00:53:22.628614   66801 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 00:53:22.628702   66801 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:22.628729   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.628777   66801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 00:53:22.628810   66801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:22.628852   66801 ssh_runner.go:195] Run: which crictl
	I0914 00:53:22.637937   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.637968   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.637969   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.638025   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.737394   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 00:53:22.737401   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:22.737465   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:22.769414   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.769467   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 00:53:22.769493   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.769558   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.829296   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 00:53:22.842571   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:22.862166   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:22.912533   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 00:53:22.933476   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 00:53:22.942786   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 00:53:22.942798   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 00:53:22.985826   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 00:53:23.006419   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 00:53:23.006466   66801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 00:53:23.038135   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 00:53:23.056930   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 00:53:23.057029   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 00:53:23.093983   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 00:53:23.094030   66801 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 00:53:23.455902   66801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:53:23.606806   66801 cache_images.go:92] duration metric: took 1.389768051s to LoadCachedImages
	W0914 00:53:23.606896   66801 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0914 00:53:23.606915   66801 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 00:53:23.607021   66801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:53:23.607100   66801 ssh_runner.go:195] Run: crio config
	I0914 00:53:23.667700   66801 cni.go:84] Creating CNI manager for ""
	I0914 00:53:23.667721   66801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:53:23.667733   66801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:53:23.667756   66801 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 00:53:23.667976   66801 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
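	(For reference: the rendered kubeadm config above could be sanity-checked on the node without touching cluster state via kubeadm's dry-run mode; illustrative only, this command is not part of the logged run:
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run)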
	I0914 00:53:23.668111   66801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 00:53:23.680313   66801 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:53:23.680388   66801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:53:23.693600   66801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 00:53:23.713803   66801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:53:23.736024   66801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 00:53:23.755636   66801 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 00:53:23.761370   66801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:53:23.776434   66801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:53:23.944035   66801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:53:23.966368   66801 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 00:53:23.966395   66801 certs.go:194] generating shared ca certs ...
	I0914 00:53:23.966415   66801 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:23.966576   66801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 00:53:23.966653   66801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 00:53:23.966667   66801 certs.go:256] generating profile certs ...
	I0914 00:53:23.966741   66801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 00:53:23.966768   66801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.crt with IP's: []
	I0914 00:53:24.120159   66801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.crt ...
	I0914 00:53:24.120188   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.crt: {Name:mk49c8d6c396579b13baab22399875d785c65936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:24.120387   66801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key ...
	I0914 00:53:24.120402   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key: {Name:mka2977400ee278174d4611322de85cc87aeaa73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:24.120510   66801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 00:53:24.120528   66801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt.58151014 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.116]
	I0914 00:53:24.464529   66801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt.58151014 ...
	I0914 00:53:24.464564   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt.58151014: {Name:mk8a6068d95f03b12e97de0f75276866bd4c8f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:24.464773   66801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014 ...
	I0914 00:53:24.464792   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014: {Name:mkd40b36d02760ed0a7ee76aab5345f5195e5dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:24.464873   66801 certs.go:381] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt.58151014 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt
	I0914 00:53:24.464946   66801 certs.go:385] copying /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014 -> /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key
	I0914 00:53:24.465013   66801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 00:53:24.465039   66801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt with IP's: []
	I0914 00:53:24.707592   66801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt ...
	I0914 00:53:24.707619   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt: {Name:mk1a917c26bbc8383b2b45a41879deb3cc256dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:24.707808   66801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key ...
	I0914 00:53:24.707821   66801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key: {Name:mkcec9ce1695d95454cff87ef2327200147f2bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:53:24.707992   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 00:53:24.708028   66801 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 00:53:24.708036   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 00:53:24.708057   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:53:24.708081   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:53:24.708103   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 00:53:24.708139   66801 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 00:53:24.708688   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:53:24.736982   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:53:24.763532   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:53:24.790323   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:53:24.816568   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 00:53:24.848280   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:53:24.880587   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:53:24.909658   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:53:24.943127   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 00:53:24.976203   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:53:25.003887   66801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 00:53:25.035242   66801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:53:25.055077   66801 ssh_runner.go:195] Run: openssl version
	I0914 00:53:25.061385   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:53:25.076435   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:53:25.082261   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:53:25.082324   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:53:25.090183   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:53:25.104791   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 00:53:25.119111   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 00:53:25.123430   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 00:53:25.123494   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 00:53:25.129328   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 00:53:25.141067   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 00:53:25.152828   66801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 00:53:25.157747   66801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 00:53:25.157862   66801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 00:53:25.163642   66801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 00:53:25.175425   66801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:53:25.179581   66801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:53:25.179638   66801 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:53:25.179718   66801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:53:25.179757   66801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:53:25.217030   66801 cri.go:89] found id: ""
	I0914 00:53:25.217113   66801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:53:25.228176   66801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:53:25.238524   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:53:25.248851   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:53:25.248869   66801 kubeadm.go:157] found existing configuration files:
	
	I0914 00:53:25.248917   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:53:25.259121   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:53:25.259183   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:53:25.269124   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:53:25.279421   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:53:25.279490   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:53:25.292114   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:53:25.301654   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:53:25.301721   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:53:25.314315   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:53:25.326606   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:53:25.326672   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:53:25.339753   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 00:53:25.461382   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 00:53:25.461487   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:53:25.638167   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:53:25.638324   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:53:25.638453   66801 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 00:53:25.871162   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:53:25.872971   66801 out.go:235]   - Generating certificates and keys ...
	I0914 00:53:25.873083   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:53:25.873172   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:53:26.081705   66801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:53:26.242468   66801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:53:26.388827   66801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:53:26.584326   66801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:53:26.695148   66801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:53:26.695318   66801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-431084] and IPs [192.168.61.116 127.0.0.1 ::1]
	I0914 00:53:26.901858   66801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:53:26.902520   66801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-431084] and IPs [192.168.61.116 127.0.0.1 ::1]
	I0914 00:53:27.359477   66801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:53:27.502106   66801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:53:27.813722   66801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:53:27.814203   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:53:28.200454   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:53:28.298359   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:53:28.355702   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:53:28.447278   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:53:28.464199   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:53:28.466317   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:53:28.466413   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:53:28.651405   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:53:28.653179   66801 out.go:235]   - Booting up control plane ...
	I0914 00:53:28.653312   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:53:28.663498   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:53:28.664930   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:53:28.666058   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:53:28.673914   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 00:54:08.638893   66801 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 00:54:08.639364   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:54:08.639590   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:54:13.638946   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:54:13.639214   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:54:23.638799   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:54:23.639114   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:54:43.638821   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:54:43.639123   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:55:23.637970   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:55:23.638244   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:55:23.638267   66801 kubeadm.go:310] 
	I0914 00:55:23.638347   66801 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 00:55:23.638418   66801 kubeadm.go:310] 		timed out waiting for the condition
	I0914 00:55:23.638429   66801 kubeadm.go:310] 
	I0914 00:55:23.638472   66801 kubeadm.go:310] 	This error is likely caused by:
	I0914 00:55:23.638526   66801 kubeadm.go:310] 		- The kubelet is not running
	I0914 00:55:23.638679   66801 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 00:55:23.638692   66801 kubeadm.go:310] 
	I0914 00:55:23.638841   66801 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 00:55:23.638911   66801 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 00:55:23.638964   66801 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 00:55:23.638975   66801 kubeadm.go:310] 
	I0914 00:55:23.639126   66801 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 00:55:23.639237   66801 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 00:55:23.639250   66801 kubeadm.go:310] 
	I0914 00:55:23.639409   66801 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 00:55:23.639502   66801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 00:55:23.639581   66801 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 00:55:23.639685   66801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 00:55:23.639696   66801 kubeadm.go:310] 
	I0914 00:55:23.640119   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:55:23.640327   66801 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 00:55:23.640411   66801 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 00:55:23.640556   66801 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-431084] and IPs [192.168.61.116 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-431084] and IPs [192.168.61.116 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-431084] and IPs [192.168.61.116 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-431084] and IPs [192.168.61.116 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 00:55:23.640599   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 00:55:24.098996   66801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:55:24.113385   66801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:55:24.125029   66801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:55:24.125048   66801 kubeadm.go:157] found existing configuration files:
	
	I0914 00:55:24.125098   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:55:24.136295   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:55:24.136388   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:55:24.147455   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:55:24.159487   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:55:24.159548   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:55:24.172162   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:55:24.183902   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:55:24.183975   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:55:24.194274   66801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:55:24.202836   66801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:55:24.202908   66801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:55:24.211583   66801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 00:55:24.275667   66801 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 00:55:24.275740   66801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:55:24.422355   66801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:55:24.422473   66801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:55:24.422587   66801 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 00:55:24.603282   66801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:55:24.605160   66801 out.go:235]   - Generating certificates and keys ...
	I0914 00:55:24.605264   66801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:55:24.605351   66801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:55:24.605454   66801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 00:55:24.605533   66801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 00:55:24.605633   66801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 00:55:24.605691   66801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 00:55:24.605745   66801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 00:55:24.605810   66801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 00:55:24.605906   66801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 00:55:24.605982   66801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 00:55:24.606017   66801 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 00:55:24.606074   66801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:55:24.772320   66801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:55:24.860076   66801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:55:24.914546   66801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:55:25.061341   66801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:55:25.087975   66801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:55:25.088111   66801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:55:25.088148   66801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:55:25.239301   66801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:55:25.241140   66801 out.go:235]   - Booting up control plane ...
	I0914 00:55:25.241279   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:55:25.244423   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:55:25.253381   66801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:55:25.254389   66801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:55:25.257450   66801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 00:56:05.259917   66801 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 00:56:05.260042   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:56:05.260332   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:56:10.260969   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:56:10.261262   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:56:20.261794   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:56:20.262025   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:56:40.263705   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:56:40.263946   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:57:20.262706   66801 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 00:57:20.262973   66801 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 00:57:20.262988   66801 kubeadm.go:310] 
	I0914 00:57:20.263048   66801 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 00:57:20.263104   66801 kubeadm.go:310] 		timed out waiting for the condition
	I0914 00:57:20.263126   66801 kubeadm.go:310] 
	I0914 00:57:20.263167   66801 kubeadm.go:310] 	This error is likely caused by:
	I0914 00:57:20.263207   66801 kubeadm.go:310] 		- The kubelet is not running
	I0914 00:57:20.263299   66801 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 00:57:20.263306   66801 kubeadm.go:310] 
	I0914 00:57:20.263397   66801 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 00:57:20.263448   66801 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 00:57:20.263494   66801 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 00:57:20.263516   66801 kubeadm.go:310] 
	I0914 00:57:20.263632   66801 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 00:57:20.263731   66801 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 00:57:20.263749   66801 kubeadm.go:310] 
	I0914 00:57:20.263892   66801 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 00:57:20.263985   66801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 00:57:20.264084   66801 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 00:57:20.264179   66801 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 00:57:20.264188   66801 kubeadm.go:310] 
	I0914 00:57:20.265123   66801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:57:20.265205   66801 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 00:57:20.265265   66801 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 00:57:20.265332   66801 kubeadm.go:394] duration metric: took 3m55.085697162s to StartCluster
	I0914 00:57:20.265368   66801 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:57:20.265415   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:57:20.306070   66801 cri.go:89] found id: ""
	I0914 00:57:20.306105   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.306113   66801 logs.go:278] No container was found matching "kube-apiserver"
	I0914 00:57:20.306119   66801 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:57:20.306175   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:57:20.338931   66801 cri.go:89] found id: ""
	I0914 00:57:20.338962   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.338973   66801 logs.go:278] No container was found matching "etcd"
	I0914 00:57:20.338982   66801 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:57:20.339036   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:57:20.372786   66801 cri.go:89] found id: ""
	I0914 00:57:20.372812   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.372820   66801 logs.go:278] No container was found matching "coredns"
	I0914 00:57:20.372826   66801 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:57:20.372885   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:57:20.404358   66801 cri.go:89] found id: ""
	I0914 00:57:20.404381   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.404390   66801 logs.go:278] No container was found matching "kube-scheduler"
	I0914 00:57:20.404397   66801 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:57:20.404453   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:57:20.436926   66801 cri.go:89] found id: ""
	I0914 00:57:20.436953   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.436961   66801 logs.go:278] No container was found matching "kube-proxy"
	I0914 00:57:20.436966   66801 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:57:20.437014   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:57:20.469703   66801 cri.go:89] found id: ""
	I0914 00:57:20.469729   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.469737   66801 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 00:57:20.469742   66801 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:57:20.469807   66801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:57:20.505821   66801 cri.go:89] found id: ""
	I0914 00:57:20.505853   66801 logs.go:276] 0 containers: []
	W0914 00:57:20.505861   66801 logs.go:278] No container was found matching "kindnet"
	I0914 00:57:20.505870   66801 logs.go:123] Gathering logs for kubelet ...
	I0914 00:57:20.505888   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 00:57:20.556330   66801 logs.go:123] Gathering logs for dmesg ...
	I0914 00:57:20.556366   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:57:20.569637   66801 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:57:20.569661   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 00:57:20.678176   66801 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 00:57:20.678200   66801 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:57:20.678212   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:57:20.790946   66801 logs.go:123] Gathering logs for container status ...
	I0914 00:57:20.790983   66801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 00:57:20.826938   66801 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 00:57:20.827011   66801 out.go:270] * 
	* 
	W0914 00:57:20.827081   66801 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 00:57:20.827101   66801 out.go:270] * 
	* 
	W0914 00:57:20.828011   66801 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:57:20.830648   66801 out.go:201] 
	W0914 00:57:20.831659   66801 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 00:57:20.831701   66801 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 00:57:20.831730   66801 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 00:57:20.833050   66801 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
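For reference, the retry suggested further up in this log (the K8S_KUBELET_NOT_RUNNING suggestion to pass --extra-config=kubelet.cgroup-driver=systemd) would look like the command below. This is only a sketch assembled from the failing first-start arguments above plus that one extra flag; it was not executed as part of this run.

    out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default \
      --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd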
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 6 (218.81326ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:21.094632   73145 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-431084" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
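The status post-mortem above also warns that kubectl is pointing at a stale minikube VM and that the "old-k8s-version-431084" endpoint is missing from the kubeconfig. The fix it suggests, scoped to this profile, would be run as follows (sketch only; the -p flag selects the profile and this command was not run here):

    out/minikube-linux-amd64 update-context -p old-k8s-version-431084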
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (285.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-754332 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-754332 --alsologtostderr -v=3: exit status 82 (2m0.524430012s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-754332"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:55:05.852140   71414 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:55:05.852358   71414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:55:05.852387   71414 out.go:358] Setting ErrFile to fd 2...
	I0914 00:55:05.852396   71414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:55:05.852596   71414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:55:05.852871   71414 out.go:352] Setting JSON to false
	I0914 00:55:05.852966   71414 mustload.go:65] Loading cluster: default-k8s-diff-port-754332
	I0914 00:55:05.853340   71414 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:55:05.853416   71414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 00:55:05.853608   71414 mustload.go:65] Loading cluster: default-k8s-diff-port-754332
	I0914 00:55:05.853743   71414 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:55:05.853776   71414 stop.go:39] StopHost: default-k8s-diff-port-754332
	I0914 00:55:05.854178   71414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:55:05.854223   71414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:55:05.869708   71414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0914 00:55:05.870209   71414 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:55:05.870849   71414 main.go:141] libmachine: Using API Version  1
	I0914 00:55:05.870884   71414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:55:05.871299   71414 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:55:05.873548   71414 out.go:177] * Stopping node "default-k8s-diff-port-754332"  ...
	I0914 00:55:05.874935   71414 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 00:55:05.874961   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 00:55:05.875200   71414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 00:55:05.875230   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 00:55:05.878032   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 00:55:05.878475   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 01:54:07 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 00:55:05.878498   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 00:55:05.878631   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 00:55:05.878807   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 00:55:05.878951   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 00:55:05.879110   71414 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 00:55:05.969250   71414 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 00:55:06.025158   71414 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 00:55:06.083374   71414 main.go:141] libmachine: Stopping "default-k8s-diff-port-754332"...
	I0914 00:55:06.083411   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 00:55:06.084970   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Stop
	I0914 00:55:06.088496   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 0/120
	I0914 00:55:07.090616   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 1/120
	I0914 00:55:08.092270   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 2/120
	I0914 00:55:09.093865   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 3/120
	I0914 00:55:10.095061   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 4/120
	I0914 00:55:11.097194   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 5/120
	I0914 00:55:12.098736   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 6/120
	I0914 00:55:13.100029   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 7/120
	I0914 00:55:14.102372   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 8/120
	I0914 00:55:15.103705   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 9/120
	I0914 00:55:16.105167   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 10/120
	I0914 00:55:17.106552   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 11/120
	I0914 00:55:18.108003   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 12/120
	I0914 00:55:19.109536   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 13/120
	I0914 00:55:20.111075   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 14/120
	I0914 00:55:21.113274   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 15/120
	I0914 00:55:22.114761   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 16/120
	I0914 00:55:23.116281   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 17/120
	I0914 00:55:24.118554   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 18/120
	I0914 00:55:25.120162   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 19/120
	I0914 00:55:26.122202   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 20/120
	I0914 00:55:27.123797   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 21/120
	I0914 00:55:28.125212   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 22/120
	I0914 00:55:29.126735   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 23/120
	I0914 00:55:30.128376   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 24/120
	I0914 00:55:31.130416   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 25/120
	I0914 00:55:32.131686   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 26/120
	I0914 00:55:33.133152   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 27/120
	I0914 00:55:34.134780   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 28/120
	I0914 00:55:35.136465   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 29/120
	I0914 00:55:36.137852   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 30/120
	I0914 00:55:37.139336   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 31/120
	I0914 00:55:38.141068   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 32/120
	I0914 00:55:39.142573   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 33/120
	I0914 00:55:40.144238   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 34/120
	I0914 00:55:41.146384   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 35/120
	I0914 00:55:42.148950   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 36/120
	I0914 00:55:43.150223   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 37/120
	I0914 00:55:44.152772   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 38/120
	I0914 00:55:45.154121   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 39/120
	I0914 00:55:46.156262   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 40/120
	I0914 00:55:47.158184   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 41/120
	I0914 00:55:48.159628   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 42/120
	I0914 00:55:49.161158   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 43/120
	I0914 00:55:50.162790   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 44/120
	I0914 00:55:51.165016   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 45/120
	I0914 00:55:52.166576   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 46/120
	I0914 00:55:53.167942   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 47/120
	I0914 00:55:54.169809   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 48/120
	I0914 00:55:55.171373   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 49/120
	I0914 00:55:56.173752   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 50/120
	I0914 00:55:57.175024   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 51/120
	I0914 00:55:58.176485   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 52/120
	I0914 00:55:59.178180   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 53/120
	I0914 00:56:00.206682   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 54/120
	I0914 00:56:01.208432   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 55/120
	I0914 00:56:02.209955   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 56/120
	I0914 00:56:03.211558   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 57/120
	I0914 00:56:04.213240   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 58/120
	I0914 00:56:05.214759   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 59/120
	I0914 00:56:06.217430   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 60/120
	I0914 00:56:07.219123   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 61/120
	I0914 00:56:08.221264   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 62/120
	I0914 00:56:09.222778   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 63/120
	I0914 00:56:10.224261   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 64/120
	I0914 00:56:11.226396   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 65/120
	I0914 00:56:12.228281   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 66/120
	I0914 00:56:13.229695   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 67/120
	I0914 00:56:14.231353   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 68/120
	I0914 00:56:15.232992   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 69/120
	I0914 00:56:16.235302   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 70/120
	I0914 00:56:17.237085   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 71/120
	I0914 00:56:18.238478   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 72/120
	I0914 00:56:19.239956   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 73/120
	I0914 00:56:20.241500   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 74/120
	I0914 00:56:21.244312   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 75/120
	I0914 00:56:22.246531   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 76/120
	I0914 00:56:23.248036   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 77/120
	I0914 00:56:24.250803   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 78/120
	I0914 00:56:25.252530   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 79/120
	I0914 00:56:26.254046   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 80/120
	I0914 00:56:27.255900   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 81/120
	I0914 00:56:28.257662   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 82/120
	I0914 00:56:29.259143   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 83/120
	I0914 00:56:30.260869   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 84/120
	I0914 00:56:31.263001   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 85/120
	I0914 00:56:32.264658   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 86/120
	I0914 00:56:33.266764   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 87/120
	I0914 00:56:34.268743   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 88/120
	I0914 00:56:35.270078   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 89/120
	I0914 00:56:36.272306   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 90/120
	I0914 00:56:37.274306   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 91/120
	I0914 00:56:38.275995   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 92/120
	I0914 00:56:39.277792   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 93/120
	I0914 00:56:40.279051   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 94/120
	I0914 00:56:41.281105   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 95/120
	I0914 00:56:42.282733   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 96/120
	I0914 00:56:43.284110   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 97/120
	I0914 00:56:44.286564   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 98/120
	I0914 00:56:45.288035   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 99/120
	I0914 00:56:46.289808   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 100/120
	I0914 00:56:47.291546   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 101/120
	I0914 00:56:48.293013   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 102/120
	I0914 00:56:49.294401   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 103/120
	I0914 00:56:50.296181   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 104/120
	I0914 00:56:51.298361   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 105/120
	I0914 00:56:52.299965   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 106/120
	I0914 00:56:53.301626   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 107/120
	I0914 00:56:54.303102   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 108/120
	I0914 00:56:55.304815   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 109/120
	I0914 00:56:56.307276   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 110/120
	I0914 00:56:57.309105   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 111/120
	I0914 00:56:58.310304   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 112/120
	I0914 00:56:59.312083   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 113/120
	I0914 00:57:00.314267   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 114/120
	I0914 00:57:01.316672   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 115/120
	I0914 00:57:02.318188   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 116/120
	I0914 00:57:03.320116   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 117/120
	I0914 00:57:04.322774   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 118/120
	I0914 00:57:05.324229   71414 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for machine to stop 119/120
	I0914 00:57:06.324816   71414 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 00:57:06.324886   71414 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 00:57:06.326584   71414 out.go:201] 
	W0914 00:57:06.327668   71414 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 00:57:06.327682   71414 out.go:270] * 
	* 
	W0914 00:57:06.330186   71414 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:57:06.331484   71414 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-754332 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
E0914 00:57:06.758206   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:06.865810   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:06.872173   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:06.883583   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:06.904984   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:06.946491   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:07.028171   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:07.189725   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:07.511980   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:08.154244   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:08.409156   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:09.435536   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:11.997646   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:17.119494   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:18.651326   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:20.623986   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332: exit status 3 (18.607490421s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:24.940177   73064 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	E0914 00:57:24.940205   73064 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-754332" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)
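
The exit status 82 above is minikube giving up after the kvm2 driver polled the guest roughly once a second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") while libvirt kept reporting the domain as "Running". A sketch of how an operator might clear such a wedged guest by hand, assuming the default qemu:///system connection the kvm2 driver uses and the domain name shown in the DBG lines above; this is a hard power-off, not what the test attempts:

    # confirm libvirt still sees the guest as running
    virsh -c qemu:///system domstate default-k8s-diff-port-754332

    # hard power-off the domain (equivalent to pulling the plug)
    virsh -c qemu:///system destroy default-k8s-diff-port-754332

    # let minikube reconcile its view of the profile afterwards
    minikube status -p default-k8s-diff-port-754332

Note that the stop path above had already copied /etc/cni and /etc/kubernetes to /var/lib/minikube/backup inside the guest before the timeout was reached.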

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-057857 --alsologtostderr -v=3
E0914 00:55:21.924803   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:32.167072   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:52.648868   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:54.609955   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-057857 --alsologtostderr -v=3: exit status 82 (2m0.830967608s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-057857"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:55:20.574849   71756 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:55:20.575183   71756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:55:20.575196   71756 out.go:358] Setting ErrFile to fd 2...
	I0914 00:55:20.575203   71756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:55:20.576416   71756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:55:20.577191   71756 out.go:352] Setting JSON to false
	I0914 00:55:20.577314   71756 mustload.go:65] Loading cluster: no-preload-057857
	I0914 00:55:20.577734   71756 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:55:20.577815   71756 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 00:55:20.577999   71756 mustload.go:65] Loading cluster: no-preload-057857
	I0914 00:55:20.578118   71756 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:55:20.578150   71756 stop.go:39] StopHost: no-preload-057857
	I0914 00:55:20.578614   71756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:55:20.578652   71756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:55:20.593888   71756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0914 00:55:20.594392   71756 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:55:20.594927   71756 main.go:141] libmachine: Using API Version  1
	I0914 00:55:20.594947   71756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:55:20.595420   71756 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:55:20.597758   71756 out.go:177] * Stopping node "no-preload-057857"  ...
	I0914 00:55:20.598882   71756 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 00:55:20.598933   71756 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 00:55:20.599169   71756 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 00:55:20.599192   71756 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 00:55:20.602330   71756 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 00:55:20.604287   71756 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 01:53:37 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 00:55:20.604314   71756 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 00:55:20.604430   71756 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 00:55:20.604582   71756 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 00:55:20.604709   71756 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 00:55:20.604824   71756 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 00:55:20.703706   71756 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 00:55:20.761594   71756 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 00:55:20.821841   71756 main.go:141] libmachine: Stopping "no-preload-057857"...
	I0914 00:55:20.821871   71756 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 00:55:20.823922   71756 main.go:141] libmachine: (no-preload-057857) Calling .Stop
	I0914 00:55:20.827826   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 0/120
	I0914 00:55:21.829402   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 1/120
	I0914 00:55:22.831097   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 2/120
	I0914 00:55:23.832851   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 3/120
	I0914 00:55:24.834268   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 4/120
	I0914 00:55:25.836037   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 5/120
	I0914 00:55:26.837553   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 6/120
	I0914 00:55:27.838835   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 7/120
	I0914 00:55:28.840506   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 8/120
	I0914 00:55:29.842876   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 9/120
	I0914 00:55:30.844813   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 10/120
	I0914 00:55:31.846230   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 11/120
	I0914 00:55:32.847628   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 12/120
	I0914 00:55:33.849132   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 13/120
	I0914 00:55:34.850702   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 14/120
	I0914 00:55:35.853002   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 15/120
	I0914 00:55:36.854474   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 16/120
	I0914 00:55:37.855857   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 17/120
	I0914 00:55:38.857291   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 18/120
	I0914 00:55:39.858528   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 19/120
	I0914 00:55:40.860909   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 20/120
	I0914 00:55:41.862711   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 21/120
	I0914 00:55:42.864332   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 22/120
	I0914 00:55:43.866925   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 23/120
	I0914 00:55:44.868299   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 24/120
	I0914 00:55:45.870414   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 25/120
	I0914 00:55:46.871662   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 26/120
	I0914 00:55:47.873113   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 27/120
	I0914 00:55:48.874395   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 28/120
	I0914 00:55:49.876187   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 29/120
	I0914 00:55:50.878749   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 30/120
	I0914 00:55:51.880165   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 31/120
	I0914 00:55:52.881405   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 32/120
	I0914 00:55:53.882952   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 33/120
	I0914 00:55:54.884262   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 34/120
	I0914 00:55:55.886459   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 35/120
	I0914 00:55:56.887939   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 36/120
	I0914 00:55:57.890766   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 37/120
	I0914 00:55:58.892225   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 38/120
	I0914 00:56:00.206527   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 39/120
	I0914 00:56:01.208707   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 40/120
	I0914 00:56:02.210575   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 41/120
	I0914 00:56:03.211986   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 42/120
	I0914 00:56:04.213734   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 43/120
	I0914 00:56:05.215642   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 44/120
	I0914 00:56:06.217822   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 45/120
	I0914 00:56:07.219476   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 46/120
	I0914 00:56:08.220989   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 47/120
	I0914 00:56:09.223149   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 48/120
	I0914 00:56:10.224707   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 49/120
	I0914 00:56:11.226193   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 50/120
	I0914 00:56:12.228578   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 51/120
	I0914 00:56:13.230576   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 52/120
	I0914 00:56:14.231845   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 53/120
	I0914 00:56:15.233307   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 54/120
	I0914 00:56:16.235700   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 55/120
	I0914 00:56:17.237184   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 56/120
	I0914 00:56:18.238740   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 57/120
	I0914 00:56:19.240172   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 58/120
	I0914 00:56:20.241889   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 59/120
	I0914 00:56:21.244579   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 60/120
	I0914 00:56:22.246102   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 61/120
	I0914 00:56:23.248246   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 62/120
	I0914 00:56:24.250566   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 63/120
	I0914 00:56:25.252225   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 64/120
	I0914 00:56:26.254465   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 65/120
	I0914 00:56:27.256040   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 66/120
	I0914 00:56:28.258333   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 67/120
	I0914 00:56:29.260092   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 68/120
	I0914 00:56:30.261715   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 69/120
	I0914 00:56:31.263325   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 70/120
	I0914 00:56:32.265326   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 71/120
	I0914 00:56:33.266944   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 72/120
	I0914 00:56:34.269283   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 73/120
	I0914 00:56:35.271312   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 74/120
	I0914 00:56:36.273071   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 75/120
	I0914 00:56:37.274765   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 76/120
	I0914 00:56:38.276192   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 77/120
	I0914 00:56:39.277795   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 78/120
	I0914 00:56:40.279169   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 79/120
	I0914 00:56:41.280877   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 80/120
	I0914 00:56:42.282269   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 81/120
	I0914 00:56:43.283962   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 82/120
	I0914 00:56:44.286374   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 83/120
	I0914 00:56:45.287767   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 84/120
	I0914 00:56:46.290017   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 85/120
	I0914 00:56:47.291880   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 86/120
	I0914 00:56:48.293244   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 87/120
	I0914 00:56:49.295569   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 88/120
	I0914 00:56:50.297119   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 89/120
	I0914 00:56:51.299249   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 90/120
	I0914 00:56:52.300577   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 91/120
	I0914 00:56:53.302361   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 92/120
	I0914 00:56:54.303686   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 93/120
	I0914 00:56:55.305030   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 94/120
	I0914 00:56:56.307180   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 95/120
	I0914 00:56:57.308970   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 96/120
	I0914 00:56:58.310304   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 97/120
	I0914 00:56:59.312220   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 98/120
	I0914 00:57:00.314633   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 99/120
	I0914 00:57:01.316916   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 100/120
	I0914 00:57:02.319145   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 101/120
	I0914 00:57:03.320752   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 102/120
	I0914 00:57:04.322559   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 103/120
	I0914 00:57:05.323946   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 104/120
	I0914 00:57:06.325983   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 105/120
	I0914 00:57:07.327407   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 106/120
	I0914 00:57:08.328715   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 107/120
	I0914 00:57:09.330060   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 108/120
	I0914 00:57:10.331458   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 109/120
	I0914 00:57:11.333646   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 110/120
	I0914 00:57:12.335098   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 111/120
	I0914 00:57:13.336529   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 112/120
	I0914 00:57:14.338112   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 113/120
	I0914 00:57:15.339380   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 114/120
	I0914 00:57:16.341597   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 115/120
	I0914 00:57:17.343032   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 116/120
	I0914 00:57:18.344647   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 117/120
	I0914 00:57:19.346091   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 118/120
	I0914 00:57:20.347500   71756 main.go:141] libmachine: (no-preload-057857) Waiting for machine to stop 119/120
	I0914 00:57:21.348407   71756 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 00:57:21.348477   71756 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 00:57:21.350134   71756 out.go:201] 
	W0914 00:57:21.351173   71756 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 00:57:21.351192   71756 out.go:270] * 
	* 
	W0914 00:57:21.353858   71756 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:57:21.355108   71756 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-057857 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857: exit status 3 (18.43180965s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:39.788148   73215 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	E0914 00:57:39.788176   73215 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-057857" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.26s)
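
Here the post-mortem status cannot even reach the guest (exit status 3, "no route to host" on 192.168.39.129:22), so the state string is simply "Error" and the helper skips log retrieval. A small reachability sketch, again assuming direct access to the libvirt host and the lease IP recorded in the stderr above; the harness itself does none of this:

    # is the domain still defined and running according to libvirt?
    virsh -c qemu:///system list --all

    # does the guest still hold its DHCP lease, and does the IP answer?
    virsh -c qemu:///system domifaddr no-preload-057857
    ping -c 3 192.168.39.129

    # if the network answers, probe SSH roughly the way the status check does
    minikube ssh -p no-preload-057857 -- uptime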

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-880490 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-880490 --alsologtostderr -v=3: exit status 82 (2m0.495367708s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-880490"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:57:05.417270   73047 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:57:05.417437   73047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:57:05.417448   73047 out.go:358] Setting ErrFile to fd 2...
	I0914 00:57:05.417454   73047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:57:05.417666   73047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:57:05.417924   73047 out.go:352] Setting JSON to false
	I0914 00:57:05.418016   73047 mustload.go:65] Loading cluster: embed-certs-880490
	I0914 00:57:05.418387   73047 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:57:05.418469   73047 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:57:05.418667   73047 mustload.go:65] Loading cluster: embed-certs-880490
	I0914 00:57:05.418790   73047 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:57:05.418837   73047 stop.go:39] StopHost: embed-certs-880490
	I0914 00:57:05.419305   73047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:57:05.419352   73047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:57:05.434212   73047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I0914 00:57:05.434663   73047 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:57:05.435223   73047 main.go:141] libmachine: Using API Version  1
	I0914 00:57:05.435243   73047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:57:05.435577   73047 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:57:05.437833   73047 out.go:177] * Stopping node "embed-certs-880490"  ...
	I0914 00:57:05.438725   73047 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 00:57:05.438752   73047 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:57:05.438969   73047 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 00:57:05.439004   73047 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 00:57:05.441621   73047 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 00:57:05.441997   73047 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 01:56:14 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 00:57:05.442028   73047 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 00:57:05.442164   73047 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 00:57:05.442324   73047 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 00:57:05.442481   73047 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 00:57:05.442608   73047 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 00:57:05.534296   73047 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 00:57:05.600665   73047 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 00:57:05.657621   73047 main.go:141] libmachine: Stopping "embed-certs-880490"...
	I0914 00:57:05.657700   73047 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 00:57:05.659456   73047 main.go:141] libmachine: (embed-certs-880490) Calling .Stop
	I0914 00:57:05.663293   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 0/120
	I0914 00:57:06.664796   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 1/120
	I0914 00:57:07.666299   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 2/120
	I0914 00:57:08.667885   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 3/120
	I0914 00:57:09.669276   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 4/120
	I0914 00:57:10.671299   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 5/120
	I0914 00:57:11.672602   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 6/120
	I0914 00:57:12.674009   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 7/120
	I0914 00:57:13.675490   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 8/120
	I0914 00:57:14.676969   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 9/120
	I0914 00:57:15.678726   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 10/120
	I0914 00:57:16.680345   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 11/120
	I0914 00:57:17.682366   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 12/120
	I0914 00:57:18.684239   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 13/120
	I0914 00:57:19.685754   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 14/120
	I0914 00:57:20.687607   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 15/120
	I0914 00:57:21.689054   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 16/120
	I0914 00:57:22.690409   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 17/120
	I0914 00:57:23.691953   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 18/120
	I0914 00:57:24.694390   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 19/120
	I0914 00:57:25.695662   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 20/120
	I0914 00:57:26.697070   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 21/120
	I0914 00:57:27.698666   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 22/120
	I0914 00:57:28.700043   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 23/120
	I0914 00:57:29.701644   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 24/120
	I0914 00:57:30.703842   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 25/120
	I0914 00:57:31.705162   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 26/120
	I0914 00:57:32.706768   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 27/120
	I0914 00:57:33.708219   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 28/120
	I0914 00:57:34.709742   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 29/120
	I0914 00:57:35.712331   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 30/120
	I0914 00:57:36.713725   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 31/120
	I0914 00:57:37.715636   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 32/120
	I0914 00:57:38.717131   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 33/120
	I0914 00:57:39.718593   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 34/120
	I0914 00:57:40.720964   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 35/120
	I0914 00:57:41.722661   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 36/120
	I0914 00:57:42.724404   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 37/120
	I0914 00:57:43.726455   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 38/120
	I0914 00:57:44.728079   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 39/120
	I0914 00:57:45.729847   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 40/120
	I0914 00:57:46.731592   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 41/120
	I0914 00:57:47.733420   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 42/120
	I0914 00:57:48.734992   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 43/120
	I0914 00:57:49.736553   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 44/120
	I0914 00:57:50.738965   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 45/120
	I0914 00:57:51.740461   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 46/120
	I0914 00:57:52.741971   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 47/120
	I0914 00:57:53.743582   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 48/120
	I0914 00:57:54.745070   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 49/120
	I0914 00:57:55.747631   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 50/120
	I0914 00:57:56.749241   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 51/120
	I0914 00:57:57.750735   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 52/120
	I0914 00:57:58.752260   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 53/120
	I0914 00:57:59.753660   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 54/120
	I0914 00:58:00.755661   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 55/120
	I0914 00:58:01.757117   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 56/120
	I0914 00:58:02.758830   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 57/120
	I0914 00:58:03.760392   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 58/120
	I0914 00:58:04.762301   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 59/120
	I0914 00:58:05.765009   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 60/120
	I0914 00:58:06.766589   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 61/120
	I0914 00:58:07.768024   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 62/120
	I0914 00:58:08.769506   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 63/120
	I0914 00:58:09.771041   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 64/120
	I0914 00:58:10.772999   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 65/120
	I0914 00:58:11.774420   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 66/120
	I0914 00:58:12.776387   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 67/120
	I0914 00:58:13.778587   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 68/120
	I0914 00:58:14.780206   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 69/120
	I0914 00:58:15.781656   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 70/120
	I0914 00:58:16.783089   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 71/120
	I0914 00:58:17.784397   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 72/120
	I0914 00:58:18.785887   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 73/120
	I0914 00:58:19.787205   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 74/120
	I0914 00:58:20.788907   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 75/120
	I0914 00:58:21.790226   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 76/120
	I0914 00:58:22.791642   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 77/120
	I0914 00:58:23.793255   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 78/120
	I0914 00:58:24.794754   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 79/120
	I0914 00:58:25.797102   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 80/120
	I0914 00:58:26.798796   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 81/120
	I0914 00:58:27.800180   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 82/120
	I0914 00:58:28.801686   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 83/120
	I0914 00:58:29.803229   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 84/120
	I0914 00:58:30.805246   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 85/120
	I0914 00:58:31.806538   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 86/120
	I0914 00:58:32.808316   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 87/120
	I0914 00:58:33.809682   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 88/120
	I0914 00:58:34.811426   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 89/120
	I0914 00:58:35.814025   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 90/120
	I0914 00:58:36.815447   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 91/120
	I0914 00:58:37.816915   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 92/120
	I0914 00:58:38.818386   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 93/120
	I0914 00:58:39.819807   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 94/120
	I0914 00:58:40.821999   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 95/120
	I0914 00:58:41.823320   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 96/120
	I0914 00:58:42.824815   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 97/120
	I0914 00:58:43.826190   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 98/120
	I0914 00:58:44.827558   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 99/120
	I0914 00:58:45.829762   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 100/120
	I0914 00:58:46.831024   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 101/120
	I0914 00:58:47.832376   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 102/120
	I0914 00:58:48.833740   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 103/120
	I0914 00:58:49.835109   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 104/120
	I0914 00:58:50.837501   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 105/120
	I0914 00:58:51.838848   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 106/120
	I0914 00:58:52.841077   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 107/120
	I0914 00:58:53.842514   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 108/120
	I0914 00:58:54.844274   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 109/120
	I0914 00:58:55.845695   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 110/120
	I0914 00:58:56.847034   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 111/120
	I0914 00:58:57.848454   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 112/120
	I0914 00:58:58.850171   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 113/120
	I0914 00:58:59.851630   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 114/120
	I0914 00:59:00.853817   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 115/120
	I0914 00:59:01.855163   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 116/120
	I0914 00:59:02.856569   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 117/120
	I0914 00:59:03.858088   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 118/120
	I0914 00:59:04.859616   73047 main.go:141] libmachine: (embed-certs-880490) Waiting for machine to stop 119/120
	I0914 00:59:05.860401   73047 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 00:59:05.860452   73047 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 00:59:05.862608   73047 out.go:201] 
	W0914 00:59:05.863862   73047 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 00:59:05.863883   73047 out.go:270] * 
	* 
	W0914 00:59:05.866398   73047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 00:59:05.867558   73047 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-880490 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490
E0914 00:59:08.765623   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:09.642483   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490: exit status 3 (18.623364217s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:59:24.492145   74106 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host
	E0914 00:59:24.492166   74106 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-880490" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.12s)
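The log above shows the shape of this failure: after backing up /etc/cni and /etc/kubernetes over SSH, the stop call polls the VM state roughly once per second for 120 attempts, the machine never leaves "Running", and the run exits with GUEST_STOP_TIMEOUT. Below is a minimal sketch of that bounded wait loop, not minikube's actual code; the driver interface and fakeDriver are hypothetical stand-ins for a libmachine-style driver.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmDriver is a hypothetical stand-in for a libmachine-style VM driver.
type vmDriver interface {
	Stop() error
	GetState() (string, error)
}

// fakeDriver ignores Stop and keeps reporting "Running", reproducing the
// GUEST_STOP_TIMEOUT path seen in the log above.
type fakeDriver struct{}

func (fakeDriver) Stop() error               { return nil }
func (fakeDriver) GetState() (string, error) { return "Running", nil }

func stopWithTimeout(d vmDriver, attempts int, delay time.Duration) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := d.GetState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(delay)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Three quick attempts keep the demo short; the loop in the log runs
	// 120 attempts about one second apart before giving up.
	if err := stopWithTimeout(fakeDriver{}, 3, 100*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}

Because the fake driver never reports "Stopped", the sketch ends in the same "unable to stop vm" error the test hit, which is then what forces the post-mortem status check and the eventual exit status 82.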

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-431084 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-431084 create -f testdata/busybox.yaml: exit status 1 (43.396702ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-431084" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-431084 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
E0914 00:57:21.306800   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 6 (215.530188ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:21.350633   73185 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-431084" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 6 (219.103701ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:21.573651   73221 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-431084" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
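The kubectl error above ("context ... does not exist") and the status error ("does not appear in .../kubeconfig") point at the same condition: the profile's context is missing from the kubeconfig after the earlier failed stop. A minimal sketch of that lookup using client-go's clientcmd loader follows; the kubeconfig path and context name are copied from the log, everything else is illustrative rather than the test's own code.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/19640-5422/kubeconfig" // path from the log
	name := "old-k8s-version-431084"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition kubectl reports as: context "<name>" does not exist.
		fmt.Printf("context %q does not appear in %s\n", name, kubeconfig)
		return
	}
	fmt.Printf("context %q found; current context is %q\n", name, cfg.CurrentContext)
}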

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-431084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-431084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.470446529s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-431084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-431084 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-431084 describe deploy/metrics-server -n kube-system: exit status 1 (47.346105ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-431084" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-431084 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 6 (215.067481ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:58:56.306177   73922 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-431084" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.73s)
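The addon enable fails while applying the metrics-server manifests because the apiserver at localhost:8443 refuses connections, and the follow-up assertion then cannot find the overridden image in the deployment. A minimal sketch of the kind of check that assertion implies, shelling out to kubectl and looking for the fake.domain image in the describe output; the context name and image come from the log, and this is not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const wantImage = "fake.domain/registry.k8s.io/echoserver:1.4"

	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-431084",
		"describe", "deploy/metrics-server", "-n", "kube-system",
	).CombinedOutput()
	if err != nil {
		// With the apiserver refusing connections or the context missing,
		// this is where the run above fell over.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), wantImage) {
		fmt.Printf("addon did not load correct image; expected output to contain %q\n", wantImage)
		return
	}
	fmt.Println("metrics-server is using the overridden image")
}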

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
E0914 00:57:27.361522   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332: exit status 3 (3.167587438s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:28.108168   73306 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	E0914 00:57:28.108191   73306 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153711105s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332: exit status 3 (3.063359774s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:37.324121   73386 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	E0914 00:57:37.324170   73386 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-754332" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)
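The repeated `status --format={{.Host}}` invocations above are Go text/template expressions evaluated against minikube's status object, which is why the post-mortem prints just "Error" on a line by itself. A minimal sketch of that rendering with a hypothetical status struct (field names chosen to match the template, not copied from minikube's source):

package main

import (
	"os"
	"text/template"
)

// status is a hypothetical struct; only the Host field is needed for the
// {{.Host}} template the tests above pass to --format.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// "Error" is what the post-mortem prints once SSH to the node fails.
	st := status{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent", Kubeconfig: "Nonexistent"}

	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}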

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857: exit status 3 (3.167968735s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:42.956252   73496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	E0914 00:57:42.956278   73496 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-057857 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0914 00:57:47.720428   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:47.843263   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-057857 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152503735s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-057857 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857: exit status 3 (3.063180274s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:57:52.172317   73583 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	E0914 00:57:52.172337   73583 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-057857" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
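Both EnableAddonAfterStop failures reduce to the same symptom: the node's SSH port is unreachable ("no route to host"), so every status and addons command that tunnels over SSH exits with an error instead of the expected "Stopped" state. A minimal reachability probe for that condition, assuming only the node address taken from the log above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.129:22" // node address from the log above

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// Mirrors the "dial tcp ...:22: connect: no route to host" errors
		// in the stderr blocks above.
		fmt.Printf("ssh port unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable:", addr)
}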

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (723.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m0.059629328s)

                                                
                                                
-- stdout --
	* [old-k8s-version-431084] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-431084" primary control-plane node in "old-k8s-version-431084" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:58:58.905542   74039 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:58:58.905634   74039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:58:58.905649   74039 out.go:358] Setting ErrFile to fd 2...
	I0914 00:58:58.905658   74039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:58:58.905864   74039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:58:58.906404   74039 out.go:352] Setting JSON to false
	I0914 00:58:58.907411   74039 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6085,"bootTime":1726269454,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:58:58.907505   74039 start.go:139] virtualization: kvm guest
	I0914 00:58:58.910130   74039 out.go:177] * [old-k8s-version-431084] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:58:58.911490   74039 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:58:58.911488   74039 notify.go:220] Checking for updates...
	I0914 00:58:58.914386   74039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:58:58.915742   74039 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:58:58.917094   74039 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:58:58.918463   74039 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:58:58.919659   74039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:58:58.921522   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 00:58:58.921943   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:58:58.921982   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:58:58.937066   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0914 00:58:58.937583   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:58:58.938111   74039 main.go:141] libmachine: Using API Version  1
	I0914 00:58:58.938134   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:58:58.938445   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:58:58.938625   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:58:58.940513   74039 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 00:58:58.941675   74039 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:58:58.941990   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:58:58.942028   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:58:58.957053   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0914 00:58:58.957536   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:58:58.957979   74039 main.go:141] libmachine: Using API Version  1
	I0914 00:58:58.958004   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:58:58.958377   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:58:58.958535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 00:58:58.994931   74039 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:58:58.996116   74039 start.go:297] selected driver: kvm2
	I0914 00:58:58.996132   74039 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:58:58.996234   74039 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:58:58.997062   74039 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:58:58.997155   74039 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:58:59.012584   74039 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:58:59.013038   74039 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:58:59.013075   74039 cni.go:84] Creating CNI manager for ""
	I0914 00:58:59.013124   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:58:59.013171   74039 start.go:340] cluster config:
	{Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:58:59.013293   74039 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:58:59.016266   74039 out.go:177] * Starting "old-k8s-version-431084" primary control-plane node in "old-k8s-version-431084" cluster
	I0914 00:58:59.017798   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:58:59.017839   74039 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 00:58:59.017870   74039 cache.go:56] Caching tarball of preloaded images
	I0914 00:58:59.017960   74039 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:58:59.017971   74039 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 00:58:59.018086   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 00:58:59.018334   74039 start.go:360] acquireMachinesLock for old-k8s-version-431084: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
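(Editor's note: the retry.go lines above show the wait-for-IP loop: each probe of the domain's DHCP lease fails until the guest boots, and the delay between attempts grows with jitter until the lease for 192.168.61.116 appears. Below is a minimal, self-contained sketch of that wait-with-growing-backoff pattern; it is illustrative only, not minikube's actual implementation, and lookupIP is a hypothetical stand-in for querying the hypervisor.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical probe for the guest's current DHCP lease.
// In this sketch it always fails, so the retry loop runs to its deadline.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP retries lookupIP with a randomized, growing delay until it
// succeeds or the overall deadline passes, mirroring the retry lines above.
func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay each round
	}
	return "", fmt.Errorf("machine did not come up within %s", deadline)
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}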
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
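(Editor's note: the WaitForSSH step simply runs "exit 0" through the external ssh binary with the non-interactive options listed above and treats a clean exit as "SSH is available". A hedged sketch of that probe with os/exec; the flag list is taken from the log, while the function name, key path placeholder, and return convention are illustrative.)

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs `exit 0` on the guest through the system ssh client,
// using the same non-interactive options the log shows, and reports
// whether the command exited cleanly.
func sshReachable(user, host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	ok := sshReachable("docker", "192.168.61.116", "/path/to/id_rsa")
	fmt.Println("ssh available:", ok)
}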
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
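(Editor's note: the clock check converts the guest's `date +%s.%N` output back into a timestamp and compares it against the host-side reference recorded just before: 1726275772.293932338 minus 01:02:52.199751432 UTC is the reported 94.180906ms, which is under the tolerance, so the guest clock is left untouched. A tiny sketch of that comparison; the tolerance value below is an assumption for illustration, since the log only says the delta is "within tolerance".)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N`, and the host-side
	// reference time, both taken from the log lines above.
	guest := time.Unix(1726275772, 293932338)
	remote := time.Date(2024, 9, 14, 1, 2, 52, 199751432, time.UTC)

	delta := guest.Sub(remote)
	fmt.Println("delta:", delta) // 94.180906ms, matching the log

	// Illustrative tolerance only; the actual threshold is not shown here.
	const tolerance = 1 * time.Second
	fmt.Println("within tolerance:", delta < tolerance && delta > -tolerance)
}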
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
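(Editor's note: the netfilter check above is a two-step fallback: `sysctl net.bridge.bridge-nf-call-iptables` fails with status 255 because the bridge module is not loaded yet, so `modprobe br_netfilter` is run and IPv4 forwarding is enabled explicitly. A rough local sketch of that sequence with os/exec; the helper name and error handling are illustrative, not minikube's code.)

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl
// key is not visible yet, load br_netfilter, then turn on ip_forward.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// A missing key is expected before the module is loaded; try modprobe.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("error:", err)
	}
}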
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
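(Editor's note: the preload step checks for /preloaded.tar.lz4 on the guest, and since the `stat` fails it copies the ~473 MB preload tarball over and unpacks it into /var with lz4 before deleting it. A simplified local sketch of the check-copy-extract-cleanup sequence; the paths and the plain file copy standing in for scp are illustrative assumptions.)

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// stagePreload copies the cached preload tarball to dst if it is not
// already there, extracts it under extractDir, then removes the tarball.
func stagePreload(src, dst, extractDir string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		if _, err := io.Copy(out, in); err != nil {
			out.Close()
			return err
		}
		out.Close()
	}
	// Same tar invocation as the log, minus sudo.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", extractDir, "-xf", dst)
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", dst, err)
	}
	return os.Remove(dst)
}

func main() {
	err := stagePreload("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4", "/var")
	fmt.Println(err)
}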
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
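(Editor's note: the image section above follows one rule per image: if the expected image ID is not present in the container runtime (checked with `podman image inspect`), the stale tag is removed with `crictl rmi` and the image is reloaded from the local cache directory; when the cached file itself is missing, as with kube-controller-manager here, the warning is printed and startup continues, pulling images later instead. A compact sketch of that decision, with hypothetical helper functions standing in for the remote calls and a simplified cache naming scheme.)

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// inRuntime and removeFromRuntime are hypothetical stand-ins for the
// `podman image inspect` and `crictl rmi` calls seen in the log.
func inRuntime(image string) bool { return false }

func removeFromRuntime(image string) error { return nil }

// loadCachedImage reloads one image from the on-disk cache, or reports
// that the cache file is missing so the caller can fall back to pulling.
func loadCachedImage(cacheDir, image string) error {
	if inRuntime(image) {
		return nil // already present at the expected hash
	}
	if err := removeFromRuntime(image); err != nil {
		return err
	}
	cached := filepath.Join(cacheDir, image) // simplified naming scheme
	if _, err := os.Stat(cached); errors.Is(err, os.ErrNotExist) {
		return fmt.Errorf("LoadCachedImages: stat %s: no such file or directory", cached)
	}
	// ... transfer and import the cached tarball here ...
	return nil
}

func main() {
	err := loadCachedImage("/home/jenkins/.minikube/cache/images/amd64",
		"registry.k8s.io/kube-controller-manager_v1.20.0")
	fmt.Println(err) // images that fail here are pulled later instead
}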
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
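The kubeadm.yaml.new copied above is the configuration dumped at kubeadm.go:187, rendered from the cluster values earlier in the log. For illustration only (this is not minikube's actual code), a minimal Go sketch of rendering a fragment of such a config from those same values:

// render_kubeadm.go - hypothetical sketch; field set and template are
// illustrative, taken from the config printed in the log above.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	NodeIP    string
	Port      int
	CRISocket string
	NodeName  string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values as they appear in the log for this node.
	p := params{
		NodeIP:    "192.168.61.116",
		Port:      8443,
		CRISocket: "/var/run/crio/crio.sock",
		NodeName:  "old-k8s-version-431084",
	}
	f, err := os.Create("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, p); err != nil {
		panic(err)
	}
}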
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
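The "openssl x509 ... -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check using crypto/x509 (the certificate path is copied from the log and needs read permission; the helper name is made up for illustration):

// checkend.go - sketch of the 86400-second expiry check done via openssl above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Same 24h window as "-checkend 86400"; path taken from the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}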
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
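The grep/rm sequence above keeps an /etc/kubernetes/*.conf file only if it already points at https://control-plane.minikube.internal:8443; anything missing or mismatched is removed so the following "kubeadm init phase kubeconfig" can write it fresh. A minimal Go sketch of that cleanup loop (illustrative only, not minikube's implementation):

// stale_conf.go - sketch of the stale-kubeconfig cleanup shown in the log.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing file or wrong endpoint: drop it (errors ignored, like the
			// log's "rm -f") and let kubeadm regenerate it.
			_ = os.Remove(f)
			fmt.Printf("removed %s\n", f)
		}
	}
}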
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
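The repeated pgrep runs above are a roughly 500ms poll for a kube-apiserver process, which never appears in the minute between 01:03:03 and 01:04:03, so the run falls back to gathering logs below. A minimal Go sketch of such a poll-until-deadline loop (the interval and timeout are read off the log timestamps, not taken from minikube's source; the log runs pgrep under sudo):

// wait_apiserver.go - sketch of the apiserver polling seen in the log.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep every interval until it succeeds or ctx expires.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// ~1 minute of polling, matching the window visible in the log.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}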
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
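The cycle above is minikube's periodic control-plane probe: it queries CRI-O through crictl for each expected component, and because every query returns an empty ID list it falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the equivalent probes, assuming shell access to the node (for example via `minikube ssh`); the crictl, journalctl, and dmesg invocations mirror the ones in the log, while the loop itself is only illustrative:

	# Probe for each expected control-plane container; an empty result per
	# name corresponds to the `found id: ""` lines above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done
	# Fallback log collection, as run by the harness.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400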
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
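Every "describe nodes" attempt fails the same way: kubectl cannot reach localhost:8443, the API server's default secure port, which is consistent with the empty kube-apiserver container list in the crictl probes above. A quick way to confirm nothing is bound to that port (assumption: run on the node; the `ss` check is illustrative and not part of the harness output):

	# Show whether anything is listening on the apiserver port.
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	# The same crictl query used by the harness also comes back empty.
	sudo crictl ps -a --quiet --name=kube-apiserver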
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
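
(The pass above repeats for the rest of this log: pgrep for a kube-apiserver process, crictl ps for each control-plane component, then a round of log gathering, retried roughly every three seconds. The following is a minimal, hypothetical Go sketch of that poll-and-collect pattern, not minikube's actual implementation, which lives in cri.go/logs.go; it assumes sudo and crictl are available on the node.)

// Hypothetical sketch (not minikube source): poll CRI-O for control-plane
// containers the way the cycle above does, retrying until a kube-apiserver
// container appears or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// components mirrors the names probed in the log lines above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line (empty when nothing matches).
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		apiserverFound := false
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("error probing %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%q containers: %v\n", c, ids)
			if c == "kube-apiserver" {
				apiserverFound = true
			}
		}
		if apiserverFound {
			return
		}
		// The log above shows a fresh pass roughly every 3 seconds.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
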
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
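
(Each cycle ends with the same four "Gathering logs" steps seen above: kubelet and CRI-O via journalctl, a filtered dmesg tail, and a container-status listing. A hypothetical Go sketch of that collection step is below; it is not minikube's code, and it simply runs the exact /bin/bash -c commands reported by ssh_runner.go, locally rather than over SSH.)

// Hypothetical sketch (not minikube source): run the "Gathering logs"
// commands from the cycle above and collect their output by name.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the log lines above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	collected := make(map[string]string, len(sources))
	for name, cmd := range sources {
		// CombinedOutput keeps stderr, so failures (such as refused
		// connections) still end up in the gathered text.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		collected[name] = string(out)
	}

	for name, text := range collected {
		fmt.Printf("==> %s (%d bytes)\n", name, len(text))
	}
}
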
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
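The cycle above repeats every few seconds for the rest of this log: minikube probes for a kube-apiserver process, lists CRI containers for each control-plane component (all come back empty), and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output while the apiserver at localhost:8443 keeps refusing connections. A rough shell equivalent of one probe cycle, reconstructed only from the commands that appear in the log lines (the loop is a condensation; flags and paths are as logged, executed on the guest via ssh_runner):

    # one probe cycle, as seen in the log above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # is an apiserver process running?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"            # each returns no container IDs here
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig        # fails: localhost:8443 connection refused
    sudo journalctl -u crio -n 400
    sudo bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"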
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
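	The kubeadm failure above points at the kubelet and the container runtime. A minimal manual triage on the node, assembled only from the commands the error message itself suggests (the cri-o socket path is taken from the log; CONTAINERID is a placeholder), might look like:
	
	    # is the kubelet unit running, and why did it exit?
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	
	    # list control-plane containers known to cri-o, excluding pause sandboxes
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	    # inspect the logs of whichever container is failing (CONTAINERID is a placeholder)
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID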
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
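	Before retrying kubeadm init, the cleanup above greps each kubeconfig for the expected control-plane endpoint and deletes any file that does not contain it. A condensed sketch of that check, with the endpoint and file names copied from the log, is:
	
	    # drop kubeconfigs that do not reference the expected control-plane endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done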
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
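	After the second attempt times out, the log gathering above can be reproduced by hand; the commands below are the ones minikube runs (runtime socket, unit names, and line counts as shown in the log), and on this node they return no control-plane containers at all:
	
	    # per-component container lookup (empty here, since no control-plane container ever started)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl ps -a --quiet --name=etcd
	
	    # runtime and kubelet journals plus recent kernel warnings
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400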
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
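The stderr block above ends with minikube's own suggestion to retry with the systemd cgroup driver. A minimal sketch of that retry, using the same binary, profile, and flags as the failing command above, with only the suggested --extra-config flag added (whether the cgroup-driver change actually resolves this particular kubelet failure is an assumption, not something the log confirms):

	# Re-run the failing start with the kubelet cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-431084 --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd

	# If it still fails, inspect the kubelet inside the VM (the same commands the kubeadm output recommends)
	out/minikube-linux-amd64 -p old-k8s-version-431084 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-431084 ssh -- sudo journalctl -xeu kubelet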
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (224.789814ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
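The helper above reads only the Host field via the --format template, and the harness itself marks the non-zero exit as possibly expected ("may be ok"). For reference, the same status command without the template prints all component fields (host, kubelet, apiserver, kubeconfig), which is usually more informative in a post-mortem; this is an illustrative invocation with the same binary and profile as above, not part of the recorded run:

	out/minikube-linux-amd64 status -p old-k8s-version-431084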
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-431084 logs -n 25
E0914 01:11:00.510939   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-431084 logs -n 25: (1.641765204s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-617306             | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
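
For orientation, the hostname step above boils down to the following shell sequence on the guest (a sketch reconstructed from the commands visible in this log; the profile name no-preload-057857 is taken from this run):

    NEW_HOSTNAME=no-preload-057857
    # set the kernel hostname and persist it
    sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
    # pin the name in /etc/hosts so it resolves even without DNS
    if ! grep -q "$NEW_HOSTNAME" /etc/hosts; then
      echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts
    fi
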
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
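
The configureAuth step above regenerates the server certificate and copies the CA and server key pair into /etc/docker on the guest; a quick manual spot-check of that state could look like this (hypothetical invocation, using the profile name from this run):

    minikube ssh -p no-preload-057857 -- ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
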
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
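
At this point the provisioner has written the insecure-registry option and restarted CRI-O. If that restart were ever suspect, one way to inspect the result by hand would be (a sketch, assuming the standard minikube guest layout shown above):

    minikube ssh -p no-preload-057857 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p no-preload-057857 -- sudo systemctl is-active crio
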
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
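
Taken together, the sed edits above leave the CRI-O drop-in in roughly this state (a reconstruction from the commands in this log, not a dump of the actual file; the section headers assume CRI-O's default config layout):

    # /etc/crio/crio.conf.d/02-crio.conf (approximate state after the edits above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
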
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
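
The runtime handshake has succeeded here (crictl reports cri-o 1.29.1). The same checks can be reproduced by hand, for example (hypothetical commands using this run's profile name):

    minikube ssh -p no-preload-057857 -- sudo crictl version
    minikube ssh -p no-preload-057857 -- crio --version
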
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
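
The one-liner above splices a host.minikube.internal entry pointing at the host-side gateway into /etc/hosts. A quick verification might be (hypothetical; the IP and profile name are taken from this run):

    minikube ssh -p no-preload-057857 -- grep host.minikube.internal /etc/hosts
    # expected: 192.168.39.1	host.minikube.internal
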
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
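The image-loading sequence above (repeated `sudo podman load -i` calls on the cached tarballs, ending with the LoadCachedImages duration metric) can be illustrated with a small standalone Go sketch. This is not minikube's cache_images implementation: it runs podman locally via os/exec, whereas minikube issues the same command on the guest through its ssh_runner; the tarball paths are the ones that appear in the log.

```go
// Illustrative sketch only: load cached image tarballs with `podman load -i`,
// mirroring the "Loading image: ..." / "Transferred and loaded ... from cache"
// steps logged above. Assumes podman is available on the local machine.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func loadCachedImages(tarballs []string) error {
	start := time.Now()
	for _, tar := range tarballs {
		stepStart := time.Now()
		// Equivalent of the logged command: sudo podman load -i <tarball>
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tar, err, out)
		}
		fmt.Printf("loaded %s in %s\n", tar, time.Since(stepStart).Round(time.Millisecond))
	}
	fmt.Printf("took %s to load all cached images\n", time.Since(start).Round(time.Millisecond))
	return nil
}

func main() {
	// Paths taken from the log lines above.
	images := []string{
		"/var/lib/minikube/images/kube-proxy_v1.31.1",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.1",
		"/var/lib/minikube/images/coredns_v1.11.3",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	if err := loadCachedImages(images); err != nil {
		log.Fatal(err)
	}
}
```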
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
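The run of `openssl x509 -noout -in <cert> -checkend 86400` commands above verifies that each existing control-plane certificate stays valid for at least another 24 hours before the restart reuses it. A pure-Go sketch of the same check, assuming PEM-encoded certificate files; the two paths in main are copied from the log:

```go
// Sketch: Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`,
// which fails when the certificate's NotAfter falls within the next 86400s.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires within d.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		expiring, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s expires within 24h: %v\n", p, expiring)
	}
}
```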
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
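On this restart path the existing configuration is reused and the control plane is rebuilt with a fixed series of `kubeadm init phase` calls against the freshly written /var/tmp/minikube/kubeadm.yaml (certs, kubeconfig, kubelet-start, control-plane, etcd). A minimal sketch of that sequence; the phase names and --config flag come from the log lines above, while the wrapper itself is hypothetical and not how minikube actually drives the commands (it runs them on the guest over SSH).

```go
// Sketch: replay the `kubeadm init phase` sequence shown in the log,
// using the kubeadm binary staged under /var/lib/minikube/binaries.
package main

import (
	"log"
	"os"
	"os/exec"
)

func runPhase(args ...string) {
	full := append([]string{"env",
		"PATH=/var/lib/minikube/binaries/v1.31.1:" + os.Getenv("PATH"),
		"kubeadm", "init", "phase"}, args...)
	cmd := exec.Command("sudo", full...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm init phase %v: %v", args, err)
	}
}

func main() {
	cfg := "--config=/var/tmp/minikube/kubeadm.yaml"
	runPhase("certs", "all", cfg)         // regenerate any missing certificates
	runPhase("kubeconfig", "all", cfg)    // rewrite admin/kubelet/controller-manager/scheduler kubeconfigs
	runPhase("kubelet-start", cfg)        // write kubelet config and start the kubelet
	runPhase("control-plane", "all", cfg) // static pod manifests for apiserver, controller-manager, scheduler
	runPhase("etcd", "local", cfg)        // static pod manifest for the local etcd member
}
```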
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
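After the phases run, the log waits for the apiserver process and then polls https://192.168.39.129:8443/healthz until it answers. A rough sketch of such a polling loop; skipping TLS verification and the chosen timeout are simplifications for illustration, not what minikube necessarily does.

```go
// Sketch: poll the apiserver /healthz endpoint until it returns 200 OK.
// TLS verification is skipped here purely to keep the example short.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// Endpoint taken from the log line above.
	if err := waitForHealthz("https://192.168.39.129:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```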
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
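The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the 94ms delta as within tolerance. A toy sketch of that comparison; the guest timestamp is the one from the log, and the 2-second tolerance is an assumed value for illustration only.

```go
// Sketch: parse a guest `date +%s.%N` timestamp and check clock drift
// against the host, as in the "guest clock delta is within tolerance" log line.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest timestamp exactly as reported over SSH in the log above.
	guestStr := "1726275772.293932338"
	sec, err := strconv.ParseFloat(guestStr, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))

	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would adjust the guest clock\n", delta)
	}
}
```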
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
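
The two sed invocations above simply rewrite the pause_image and cgroup_manager keys in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal Go sketch of the same in-place edit (illustrative only, not minikube's implementation; the file path and values are taken from the log lines above):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Same effect as the two sed commands in the log above.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
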
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
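
The api_server.go lines above show the usual readiness loop: anonymous probes against /healthz return 403 or 500 while post-start hooks are still running, and the wait ends once the endpoint answers 200 "ok". A minimal sketch of that polling pattern, assuming the endpoint and interval visible in the log (illustrative, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPIServer polls the healthz endpoint until it returns 200 "ok"
// or the timeout expires. TLS verification is skipped because the probe
// runs as an anonymous client, which is why the log shows a 403 first.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // control plane is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the probe spacing seen above
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.39.129:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
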
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
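
The pod_ready.go lines above poll each system-critical pod, skipping it while its node still reports "Ready":"False" and otherwise waiting for the pod's Ready condition. An illustrative client-go sketch of such a wait (the helper name, kubeconfig path, and timeout are assumptions, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until it reports the Ready condition or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// kubeconfig path is an assumption for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-no-preload-057857", 4*time.Minute))
}
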
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
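
The repeated "will retry after ..." lines come from a retry helper that re-probes the libvirt domain for an IP with a growing, jittered delay. A rough sketch of that pattern (names and limits are illustrative, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs probe until it succeeds or attempts are exhausted,
// sleeping a jittered, growing delay between tries, which is why the logged
// intervals (228ms, 278ms, 382ms, ...) creep upward.
func retryWithBackoff(attempts int, probe func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond * time.Duration(i+1)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, func() error {
		return errors.New("unable to find current IP address of domain")
	})
}
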
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
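The openssl/ln pairs above install each CA certificate into the OpenSSL hashed-certificate directory: openssl x509 -hash -noout prints the subject-name hash, and a <hash>.0 symlink under /etc/ssl/certs lets TLS clients locate the CA by hash (e.g. b5213941.0 for minikubeCA.pem above). A minimal sketch of the same pattern, assuming a hypothetical certificate path not taken from this log:

    #!/bin/bash
    # Sketch of the hash-symlink pattern shown above (cert path is illustrative).
    set -euo pipefail

    cert=/usr/share/ca-certificates/example-ca.pem   # assumed path for the example

    # Subject-name hash of the certificate, e.g. "b5213941" for the minikube CA.
    hash=$(openssl x509 -hash -noout -in "$cert")

    # Link <hash>.0 into the hashed directory so OpenSSL-based clients trust the CA.
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"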
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
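Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration of that certificate before the cluster is restarted. The same check in isolation, using one of the paths from the log:

    # Exit 0 if the cert is still valid 24h from now, exit 1 if it expires sooner.
    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
        echo "certificate good for at least another 24h"
    else
        echo "certificate expires within 24h - would be regenerated"
    fi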
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
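The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so the kubeadm kubeconfig phase below can regenerate it (here every grep fails because the files do not exist yet). The same pattern as a short loop, using the endpoint from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Keep the file only if it already targets the expected endpoint.
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done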
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
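The restart path does not rerun a full kubeadm init; it replays only the phases needed to bring an existing control plane back (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same /var/tmp/minikube/kubeadm.yaml. Collapsed into one sketch with the commands as logged:

    KUBEADM_PATH=/var/lib/minikube/binaries/v1.20.0
    # Intentionally unquoted so "certs all" expands to two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$KUBEADM_PATH:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done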
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
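The repeated pgrep lines above are a fixed-interval poll (roughly every 500ms, going by the timestamps) for the kube-apiserver process to appear once the control-plane manifests are in place. The equivalent loop as a sketch, with the timeout value assumed for illustration:

    # Poll for the apiserver process; the 4m deadline is an assumption for this sketch.
    deadline=$((SECONDS + 240))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "kube-apiserver did not appear in time" >&2
            exit 1
        fi
        sleep 0.5
    done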
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
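Provisioning generates a per-machine TLS server certificate signed by the minikube CA, with the SAN list shown above (loopback, the VM IP, the machine name, localhost, minikube). One quick way to confirm the SANs once the cert has been copied to its remote location (the /etc/docker/server.pem path used by the scp step below):

    # Inspect the Subject Alternative Names baked into the provisioned server cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
        | grep -A1 'Subject Alternative Name'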
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
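The clock check above converts the guest's date +%s.%N output back to a timestamp and compares it with the host-side time recorded for the same moment: 1726275791.643970925 - 1726275791.546348011 ≈ 0.097622914 s, i.e. the 97.622914ms delta reported, which is within tolerance, so no guest clock adjustment is needed.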
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
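The sed and grep edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager with conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls; the modprobe and echo lines then make bridged traffic visible to iptables and enable IPv4 forwarding. A quick post-restart check of the result:

    # Confirm the drop-in now carries the values set by the sed commands above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf

    # br_netfilter loaded and forwarding enabled, as required for the bridge CNI.
    lsmod | grep br_netfilter
    cat /proc/sys/net/ipv4/ip_forward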
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
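
The block of identical pgrep runs above is a fixed-interval poll for a kube-apiserver process belonging to this minikube profile. A small Go sketch of that loop is below; the 500ms interval matches the spacing of the log timestamps, while the one-minute deadline is an assumption for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		// Same check the log shows: is a kube-apiserver process for this minikube profile running?
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}
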
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
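
The fix.go lines above compare the guest clock (read over SSH with "date +%s.%N") against the host clock and accept the machine when the delta is small. Below is a minimal Go sketch of that comparison, reusing the guest timestamp from the log; the 2s tolerance is an assumption for illustration, not minikube's configured value.

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` captured over SSH (value taken from the log above).
	guestRaw := "1726275810.776460586"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed tolerance for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; clock would be adjusted\n", delta)
	}
}
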
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
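[editor's note] The sysctl probe above fails only because the br_netfilter module is not loaded yet, so minikube falls back to `modprobe br_netfilter` and then enables IPv4 forwarding directly via /proc. A minimal Go sketch of the same probe-and-remediate step (a hypothetical helper, not minikube's code; it assumes it runs as root on Linux):

// bridgecheck.go - illustrative only; mirrors the shell probes logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Same condition as the failed sysctl in the log: br_netfilter not loaded yet.
		fmt.Println("br_netfilter missing, loading module:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward ready")
}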
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
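[editor's note] The shell pipeline above makes the /etc/hosts entry idempotent: it filters out any existing `host.minikube.internal` line before appending the current mapping. A hedged Go sketch of the same rewrite (hypothetical helper; it targets a scratch copy rather than /etc/hosts, which needs root):

// hostsentry.go - sketch of the idempotent hosts-file update shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // equivalent of `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// minikube rewrites /etc/hosts as root; use a copy when experimenting.
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}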
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
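[editor's note] The preload check (crio.go:510/514) shells out to `crictl images --output json` and looks for the expected control-plane image tags; before the tarball is extracted the tag is missing, afterwards it is found. A hedged Go sketch of that verification — the JSON field names ("images", "repoTags") are an assumption about crictl's output shape, not something confirmed by this log:

// imagecheck.go - sketch of the preloaded-image check, assuming crictl's JSON layout.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err) // false before the preload tarball is extracted, true afterwards
}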
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
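[editor's note] The run of `openssl x509 -checkend 86400` calls above verifies that each control-plane certificate stays valid for at least another 24 hours before the existing profile is reused. A minimal Go equivalent of that check using crypto/x509 (illustrative sketch, not minikube's implementation):

// certcheck.go - report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path mirrors one of the certificates probed in the log; adjust for your own host.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}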
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
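[editor's note] During restartPrimaryControlPlane each kubeconfig under /etc/kubernetes is grepped for the expected endpoint `https://control-plane.minikube.internal:8444` and removed when it does not match (here the files are simply absent), after which the `kubeadm init phase` commands below regenerate them. A hedged Go sketch of that stale-config sweep (hypothetical helper, run as root):

// staleconfig.go - drop any kubeconfig that doesn't point at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s stale or missing, removing\n", f)
			os.Remove(f) // `kubeadm init phase kubeconfig all` will recreate it
		}
	}
}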
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
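[editor's note] The pod_ready lines above wait for the node's Ready condition and then for each system-critical pod's Ready condition in turn. A hedged client-go sketch of that per-pod check (the pod name is taken from this run; the kubeconfig path is an assumption):

// podready.go - check a pod's Ready condition, as logged by pod_ready.go above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(client, "kube-system", "etcd-embed-certs-880490")
	fmt.Println(ready, err)
}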
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
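[editor's note] The healthz sequence above polls https://192.168.72.203:8444/healthz roughly every 500ms, treating both 403 (the anonymous probe before RBAC bootstrap roles exist) and 500 (poststarthooks still pending) as "not ready yet" until a plain 200 "ok" arrives about four seconds later. A minimal Go sketch of that polling loop (illustrative only; TLS verification is skipped because the probe is anonymous against minikube's self-signed apiserver certificate, as the 403 responses show):

// healthzwait.go - poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 (anonymous) and 500 (poststarthooks pending) both mean "keep waiting".
			fmt.Printf("healthz returned %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.203:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}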
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
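	With the three addons enabled, the harness goes on to poll the metrics-server pod until it reports Ready (the interleaved pod_ready lines below). A rough manual equivalent, offered only as a sketch (the k8s-app=metrics-server label is assumed from the upstream addon manifest and is not shown in this log):

	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m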
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
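	The node_ready poll above is waiting for the node object's Ready condition to turn True. An approximate manual check (a sketch using the node name from the log and standard kubectl JSONPath, not the harness's own code):

	    kubectl get node default-k8s-diff-port-754332 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'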
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
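	The repeated pgrep calls above are how the profile logging under pid 74039 waits for a kube-apiserver process to appear on the guest. The same probe can be wrapped in a small loop; an illustrative sketch, not the harness's code:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 0.5
	    done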
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
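	Because no kube-apiserver container was found, the run above falls back to collecting diagnostics: kubelet and CRI-O journals, dmesg, describe nodes, and container status. For reference, the same commands (taken verbatim from the log) can be re-run by hand on the guest, e.g. over minikube ssh (the access path is an assumption, not part of this log):

	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a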
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
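	The cycle above is representative of the rest of this start-up log: minikube probes for each control-plane container with crictl, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. As a rough sketch (not part of the test run, assuming only SSH access to the node), the same probe can be reproduced with the commands already shown in the log:

	    # Probe for the control-plane containers the log is looking for (all empty here).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== $name =="
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # Fallback log sources gathered when no containers are found.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400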
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
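	The interleaved pod_ready lines come from the other clusters under test (PIDs 74318, 73629, and 73455), each polling a metrics-server pod that never reports Ready. A hedged way to check the same condition by hand; the context placeholder and the label selector are assumptions, not taken from the test:

	    kubectl --context <cluster-context> -n kube-system get pods \
	      -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'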
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
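	Every describe-nodes attempt in this log fails the same way: the bundled kubectl points at localhost:8443 and the connection is refused, which is consistent with the crictl probes above finding no kube-apiserver container at all. A minimal check (again assuming shell access to the node, not part of the test) to confirm nothing is listening on that port:

	    # Connection refused on 8443 means no apiserver is bound there.
	    sudo ss -tlnp 'sport = :8443'
	    curl -ksS https://localhost:8443/healthz || true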
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
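The interleaved pod_ready lines above come from three other start attempts (PIDs 73455, 73629 and 74318), each polling a metrics-server pod for the Ready condition. A rough manual equivalent of that check, assuming the stock k8s-app=metrics-server label on the addon's pod (the kubectl context for each profile is omitted here, and the label is an assumption, not read from this log):

    # inspect the Ready condition the tests are polling
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'
    # or block until the pod becomes Ready, which is what the tests are waiting for
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=30s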
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
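The same probe cycle repeats for the rest of this attempt: logs.go asks crictl for each expected control-plane container, finds none, and then runs kubectl against an API server that never came up, so describe nodes keeps failing with the connection-refused error on localhost:8443. A minimal sketch of the equivalent manual checks on the node, using the binary and kubeconfig paths exactly as they appear in the log above (taken from the log, not independently verified):

    # list kube-apiserver containers in any state, as cri.go does
    sudo crictl ps -a --quiet --name=kube-apiserver
    # with nothing listening on localhost:8443 this fails with "connection refused"
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig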
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
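
	The empty `found id: ""` results above come from probing each control-plane component with crictl and getting no output back. A minimal standalone Go sketch of that probe (illustrative only, not minikube's actual logs.go/cri.go implementation; the component list and the use of os/exec here are assumptions) would be:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Probe each control-plane component the same way the log above does:
	// `sudo crictl ps -a --quiet --name=<component>` prints one container ID
	// per line, so empty output means no matching container exists.
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("W crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("W No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("I found %d container(s) for %q: %v\n", len(ids), name, ids)
		}
	}
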
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
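
	Each "describe nodes" attempt above fails the same way: kubectl is refused on localhost:8443 because, as the crictl probes show, no kube-apiserver container is running yet. A quick standalone check of that symptom (a hedged sketch; the address is taken from the error text, and the 2-second timeout is an arbitrary assumption) could be:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Try a plain TCP connect to the apiserver address from the kubectl error.
	// A "connection refused" here matches the error in the log and simply means
	// nothing is listening on that port yet.
	func main() {
		addr := "localhost:8443" // address taken from the kubectl error above
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("apiserver port is open at %s\n", addr)
	}
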
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
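
	The interleaved pod_ready lines belong to the other concurrently-running test processes (74318, 73629 and 73455 in the log prefix), each polling a metrics-server pod that never reports Ready. A rough way to reproduce that readiness check outside the test harness (a sketch only; the pod name comes from the log, and using kubectl with a jsonpath query rather than minikube's own pod_ready.go logic is an assumption) is:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Poll a pod's Ready condition the way the pod_ready log lines above do,
	// giving up after a fixed number of attempts.
	func main() {
		const namespace = "kube-system"
		const pod = "metrics-server-6867b74b74-4v8px" // pod name taken from the log above
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			status := strings.TrimSpace(string(out))
			if err == nil && status == "True" {
				fmt.Printf("pod %q is Ready\n", pod)
				return
			}
			fmt.Printf("attempt %d: pod %q has status \"Ready\":%q, retrying...\n", attempt, pod, status)
			time.Sleep(2 * time.Second)
		}
		fmt.Printf("gave up waiting for pod %q to become Ready\n", pod)
	}
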
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
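In parallel, the other three profiles in this run keep polling their metrics-server pods, which never leave Ready=False in this window. A hedged equivalent of the readiness check pod_ready.go performs; the context name and the k8s-app=metrics-server label selector are assumptions, not taken from this log:

  # List the metrics-server pods and print each pod's Ready condition.
  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
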
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
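The repeated "Gathering logs for ..." entries above all reduce to the same shell command: `sudo /usr/bin/crictl logs --tail 400 <container-id>`, run on the node and captured. For context only, a minimal Go sketch of that pattern; the helper name is invented here and this is not the actual minikube logs.go source.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs runs crictl through bash and returns the captured output,
// mirroring the `sudo /usr/bin/crictl logs --tail 400 <id>` commands in the log above.
func gatherContainerLogs(containerID string, tail int) (string, error) {
	cmd := fmt.Sprintf("sudo /usr/bin/crictl logs --tail %d %s", tail, containerID)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID copied from the coredns entry in the log above.
	logs, err := gatherContainerLogs("107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db", 400)
	if err != nil {
		fmt.Println("gathering logs failed:", err)
		return
	}
	fmt.Println(logs)
}
```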
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
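The burst of `kubectl get sa default --kubeconfig=...` Run lines above, one roughly every 500ms, is a poll: keep asking for the "default" service account until RBAC bootstrap has created it, at which point elevateKubeSystemPrivileges completes. A hedged sketch of such a wait loop; the function name and the two-minute timeout are illustrative assumptions, not values taken from the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until it succeeds,
// matching the ~500ms cadence visible in the Run lines above.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("result:", err)
}
```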
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
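The addon installation above stages the metrics-server manifests under /etc/kubernetes/addons/ and then applies them in a single `sudo KUBECONFIG=... kubectl apply -f ... -f ...` invocation. A small illustrative sketch of that apply step; the manifest paths are copied from the log, while the helper itself is hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests mirrors the pattern in the log:
//   sudo KUBECONFIG=<kubeconfig> <kubectl> apply -f <m1> -f <m2> ...
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddonManifests("/var/lib/minikube/binaries/v1.31.1/kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println(err)
	}
}
```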
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
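The apiserver health wait above is a plain HTTPS GET against https://192.168.50.105:8443/healthz that is considered done once status 200 with body "ok" comes back. An illustrative probe of that endpoint; skipping certificate verification is an assumption made for this sketch, since the log does not show how the real check trusts the apiserver's certificate.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one GET against the apiserver healthz URL and treats
// a 200 response as healthy, as in the "returned 200: ok" lines above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: skip TLS verification for the self-signed apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect body "ok"
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.50.105:8443/healthz"); err != nil {
		fmt.Println("healthz check failed:", err)
	}
}
```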
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
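The log-gathering steps above tail each control-plane container one at a time. A minimal sketch of doing the same by hand on the node follows; the crictl flags and the 400-line tail depth are taken from the commands recorded above, while the loop itself is an illustration and not part of the captured run:

    # Enumerate the usual control-plane containers and tail each one's logs,
    # mirroring the "Gathering logs for ..." steps recorded above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "=== $name ($id) ==="
        sudo crictl logs --tail 400 "$id"
      done
    done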
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
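The health wait above reduces to an HTTPS probe of the apiserver's /healthz endpoint. A hand-run equivalent might look like the following; the address is the one reported for the no-preload-057857 cluster, and -k (skip certificate verification) is an assumption, since the captured check goes through minikube's own client credentials:

    # Probe the apiserver health endpoint seen in the log above.
    # A healthy control plane answers HTTP 200 with the body "ok".
    curl -sk https://192.168.39.129:8443/healthz && echo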
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
	
	
	==> CRI-O <==
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.827396140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276260827375540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bded266-0d2b-4cd4-b3ca-899439ef2fd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.828207394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33d6b7e8-6daa-4a87-ae07-71c1a27da889 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.828256663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33d6b7e8-6daa-4a87-ae07-71c1a27da889 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.828300387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33d6b7e8-6daa-4a87-ae07-71c1a27da889 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.859939742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=533166e7-e47c-4da5-be29-34795e50a3c7 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.860010344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=533166e7-e47c-4da5-be29-34795e50a3c7 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.862965688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbb7e415-da54-482a-bfd0-11858fffc1e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.865620220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276260865590907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbb7e415-da54-482a-bfd0-11858fffc1e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.866424901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24e81576-0e8e-4fea-8895-85ff64010b36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.866492585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24e81576-0e8e-4fea-8895-85ff64010b36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.866526951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=24e81576-0e8e-4fea-8895-85ff64010b36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.901551124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee6486ea-cd36-4df3-b93e-aaec41c8f6fe name=/runtime.v1.RuntimeService/Version
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.901621978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee6486ea-cd36-4df3-b93e-aaec41c8f6fe name=/runtime.v1.RuntimeService/Version
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.902928599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10305600-d44b-444e-9773-8b409a8569e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.903397956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276260903359364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10305600-d44b-444e-9773-8b409a8569e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.904080728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a10627e-166f-47e8-a48e-2da1c98f4664 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.904142508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a10627e-166f-47e8-a48e-2da1c98f4664 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.904176366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a10627e-166f-47e8-a48e-2da1c98f4664 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.936873905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=564d6877-00e9-4202-b125-517d1a1e7ba0 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.936943547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=564d6877-00e9-4202-b125-517d1a1e7ba0 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.938329586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe028eea-c0a3-4d62-b10c-055a79705796 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.938691738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276260938672090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe028eea-c0a3-4d62-b10c-055a79705796 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.939269303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3545629c-9edc-48ad-8a80-88cbbfe1efb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.939320110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3545629c-9edc-48ad-8a80-88cbbfe1efb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:11:00 old-k8s-version-431084 crio[634]: time="2024-09-14 01:11:00.939352236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3545629c-9edc-48ad-8a80-88cbbfe1efb3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep14 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037690] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.969079] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.928925] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.082346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068199] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.169952] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.159964] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.281242] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Sep14 01:03] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.061152] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309557] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +10.314900] kauditd_printk_skb: 46 callbacks suppressed
	[Sep14 01:07] systemd-fstab-generator[5021]: Ignoring "noauto" option for root device
	[Sep14 01:09] systemd-fstab-generator[5305]: Ignoring "noauto" option for root device
	[  +0.068389] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:11:01 up 8 min,  0 users,  load average: 0.02, 0.09, 0.06
	Linux old-k8s-version-431084 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000bfafc0)
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: goroutine 165 [select]:
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c1fef0, 0x4f0ac20, 0xc000c14690, 0x1, 0xc0001000c0)
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024e7e0, 0xc0001000c0)
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000baadf0, 0xc000b8f7e0)
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 14 01:10:58 old-k8s-version-431084 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 14 01:10:58 old-k8s-version-431084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 14 01:10:58 old-k8s-version-431084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 14 01:10:58 old-k8s-version-431084 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 14 01:10:58 old-k8s-version-431084 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5545]: I0914 01:10:58.914445    5545 server.go:416] Version: v1.20.0
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5545]: I0914 01:10:58.914824    5545 server.go:837] Client rotation is on, will bootstrap in background
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5545]: I0914 01:10:58.916610    5545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5545]: I0914 01:10:58.917483    5545 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 14 01:10:58 old-k8s-version-431084 kubelet[5545]: W0914 01:10:58.917502    5545 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (232.850615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-431084" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (723.69s)
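
For reference, the failure above reduces to kubeadm timing out while waiting for the kubelet health endpoint on 127.0.0.1:10248, and the captured kubelet log ends with "Cannot detect current cgroup on cgroup v2", which lines up with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A rough sketch of how that suggestion and the troubleshooting commands quoted by kubeadm could be tried against this profile (profile name and versions taken from the log; these commands are illustrative and were not run as part of this test):

	PROFILE=old-k8s-version-431084
	# Retry the start with the kubelet cgroup driver pinned to systemd, as suggested above
	out/minikube-linux-amd64 start -p "$PROFILE" --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# Inspect the kubelet and any control-plane containers on the node, per the kubeadm hints
	out/minikube-linux-amd64 ssh -p "$PROFILE" -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p "$PROFILE" -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 ssh -p "$PROFILE" -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"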

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490: exit status 3 (3.167471642s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:59:27.660140   74192 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host
	E0914 00:59:27.660178   74192 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0914 00:59:31.346759   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:31.535434   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152564912s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-880490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490: exit status 3 (3.063216391s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 00:59:36.876230   74272 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host
	E0914 00:59:36.876268   74272 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.105:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-880490" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
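
For context, every non-zero exit in this block traces back to the same symptom: SSH to the node at 192.168.50.105:22 returns "no route to host" after the stop, so both the host-status probe and the addon enable fail before ever reaching the cluster. A quick check from the host could look like the following (a sketch using the address from the log; not part of the recorded run):

	out/minikube-linux-amd64 status -p embed-certs-880490 --alsologtostderr
	nc -vz -w 3 192.168.50.105 22   # expected to fail with "no route to host" while the VM network is unreachable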

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880490 -n embed-certs-880490
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-14 01:16:53.109444845 +0000 UTC m=+6627.537090055
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880490 logs -n 25: (2.060304573s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-617306             | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
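The retry loop above (api_server.go) keeps probing https://192.168.39.129:8443/healthz until the apiserver stops returning 403/500 and answers 200. Below is a minimal, hypothetical Go sketch of that kind of poll; the endpoint, the 5-minute budget, and the insecure TLS client are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires, mirroring the retry loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	// During bootstrap the apiserver serves a cert the host does not trust
	// yet, so this sketch skips TLS verification (assumption).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.129:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}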
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
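The pod_ready.go entries above wait for each system-critical pod to report Ready, and skip pods whose node is not yet Ready. A hypothetical client-go sketch of such a readiness check follows; the kubeconfig path is an assumption, and the pod/namespace names are just taken from the log for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has its Ready condition set to True.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is assumed for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "kube-apiserver-no-preload-057857")
	fmt.Println(ready, err)
}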
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
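After the daemon lookups above fail, crio.go:510 decides the v1.20.0 images are not preloaded by inspecting the output of `crictl images --output json`. A small, hypothetical sketch of that kind of check is below; the JSON field names follow crictl's documented output shape, but this is not minikube's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the relevant part of `crictl images --output json`
// (field names are an assumption based on crictl's output shape).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage runs crictl locally and reports whether the given tag is
// present in the container runtime's image store.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	found, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(found, err) // false on this run, so the cached-image path is taken
}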
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
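The rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new (2123 bytes). As a hypothetical aid for eyeballing what was generated (not something minikube does), the sketch below splits such a multi-document YAML and prints each document's apiVersion and kind; gopkg.in/yaml.v3 and the path inside the VM are assumptions.

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// listKinds prints apiVersion/kind for every document in a multi-doc
// kubeadm config such as the one rendered in the log.
func listKinds(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			return err
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
	return nil
}

func main() {
	// Path taken from the log; on this CI run the file lives inside the VM.
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Println(err)
	}
}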
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
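Each `openssl x509 ... -checkend 86400` run above verifies that the certificate remains valid for at least the next 24 hours. An equivalent, hypothetical check in Go is sketched below; the certificate path is just one of those from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least d from now, i.e. what `openssl x509 -checkend` verifies.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}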
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
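The configuration steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed over SSH (pause image, cgroup manager, sysctls) and then restart CRI-O. Purely as a rough sketch of that rewrite pattern, and not minikube's actual code (the helper name and the local path in main are made up; the regex and image string are the ones visible in the log):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a crio drop-in config,
// mirroring the `sudo sed -i 's|^.*pause_image = .*$|...|'` call in the
// log above. This is an illustrative sketch, not minikube's implementation.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Hypothetical local copy of the drop-in config, used only for the sketch.
	if err := setPauseImage("/tmp/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```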
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
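The two commands just above first grep /etc/hosts for a host.minikube.internal entry and then rebuild the file by filtering any stale entry and appending the 192.168.72.1 mapping. A minimal standalone sketch of the same idea, assuming a writable path (the helper name and the /tmp path are hypothetical; the IP and hostname come from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostEntry rewrites hostsPath so that it contains exactly one line
// mapping name to ip, keeping all unrelated entries. Writing the real
// /etc/hosts would require root; this is only a sketch.
func addHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	// Write to a temp file first, then rename, mirroring the
	// "> /tmp/h.$$; sudo cp" pattern in the log.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := addHostEntry("/tmp/hosts-example", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```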
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
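The block of repeated pgrep runs above is a simple wait loop: the kube-apiserver process is polled roughly every 500ms until it appears. A self-contained sketch of that polling pattern (the pgrep arguments and interval come from the log; the function name and two-minute timeout are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` at the given interval until a
// matching process exists or ctx expires. Hypothetical helper mirroring
// the ~500ms polling visible in the log.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```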
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
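	The kubeadm config above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small Go sketch that walks those documents with gopkg.in/yaml.v3 and prints each kind can serve as a sanity check of the generated file; this is illustrative only and not part of minikube:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc == nil {
				continue // skip empty documents
			}
			// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
			fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
		}
	}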
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
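	The grep/bash pair above pins control-plane.minikube.internal to the node IP in /etc/hosts idempotently: any stale line for that hostname is filtered out and the fresh mapping is appended. A minimal Go sketch of the same edit (the path, IP and hostname come from the log; minikube itself performs this over SSH with the bash one-liner shown):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry keeps exactly one "<ip>\t<host>" line in the hosts file.
	// Illustrative only; the logged command does the equivalent edit remotely.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.72.203", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}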
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
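	The openssl/ln sequence above (repeated for 126022.pem, minikubeCA.pem and 12602.pem) links each CA into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), the hashed-directory layout OpenSSL uses to look up trust anchors. A hedged Go sketch of that hash-and-symlink step, shelling out to openssl the same way the logged commands do:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash symlinks certPath into certsDir as "<subject-hash>.0".
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}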
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
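	Each of the six openssl checks above uses -checkend 86400, i.e. "does this certificate expire within the next 24 hours"; a non-zero exit would force regeneration. The same check expressed natively with Go's crypto/x509 (a sketch, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon) // equivalent to openssl x509 -checkend 86400
	}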
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
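	Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than doing a full kubeadm init. A simplified sketch of that phase loop; the binary and config paths are taken from the log, the loop itself is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		// Same phase order as in the log.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", cfg)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				return
			}
		}
	}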
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
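	The healthz wait above tolerates the early 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist) and 500 (poststarthooks such as rbac/bootstrap-roles still failing) responses, and keeps polling until the endpoint returns 200 "ok". A minimal sketch of such a poll loop; TLS verification is skipped here only for brevity, and the URL is the one from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: a real client should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.203:8444/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403/500 are expected while bootstrap hooks finish; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}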
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
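	With the apiserver healthy, a bridge CNI config is written to /etc/cni/net.d/1-k8s.conflist (496 bytes in this run). The log does not show the file's contents; the sketch below writes a typical bridge + host-local conflist for the 10.244.0.0/16 pod CIDR used here, purely to illustrate the shape of such a config, not the exact file minikube generated:

	package main

	import "os"

	// Illustrative bridge CNI conflist; not the exact file from this run.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}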
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
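	Each pod_ready entry above polls a pod until its Ready condition is True, and skips (with the "skipping!" warnings) pods hosted on a node that is itself not yet Ready. A hedged client-go sketch of that readiness check for one of the pods named in the log; the kubeconfig path and wait loop are illustrative, not minikube's code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-5lgsh", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}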
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
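	(The cycle above is the retry loop minikube runs while waiting for the v1.20.0 control plane to come back: for each expected component it lists CRI containers by name, then gathers kubelet, dmesg, CRI-O and node logs, with "describe nodes" failing because nothing answers on localhost:8443. A minimal sketch of the same checks, assembled only from the commands that appear verbatim in this log and meant to be run on the minikube node; the shell loop and the component list are illustrative, not part of the harness itself:
	
	#!/bin/bash
	# Probe for the core control-plane containers the same way the log above does;
	# empty output corresponds to the "No container was found matching ..." lines.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	
	# Gather the same logs the harness collects on each pass.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	# While the apiserver is down this is the command that keeps failing with
	# "The connection to the server localhost:8443 was refused".
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	
	The loop repeats every few seconds until a kube-apiserver container appears or the test's overall timeout expires.)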
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
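The cycle that repeats above is the diagnostic pass minikube runs while the control plane is still down: probe for a kube-apiserver process, list CRI containers for each expected component, then collect kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal shell sketch of that pass, built only from the commands visible in the log (intended to be run on the minikube node; the v1.20.0 kubectl path is taken verbatim from the log):

	#!/usr/bin/env bash
	# Sketch of the diagnostic pass repeated in the log above (assumes it runs on the node).
	set -u
	# 1. Is a kube-apiserver process up yet?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	# 2. Any CRI containers for the expected components?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done
	# 3. Gather the same logs the harness collects.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # refused on localhost:8443 while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

While the apiserver is down, every crictl query returns an empty list and describe nodes exits 1 with "connection refused", which is exactly the pattern the log keeps recording.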
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
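The interleaved pod_ready lines (PIDs 74318, 73629, 73455) come from separate clusters polling whether their metrics-server pod reports Ready; it stays False throughout this window. A hedged manual equivalent of that check, with the pod name copied from the log and standard kubectl jsonpath (the kubeconfig/context is assumed to point at the cluster under test):

	# Poll the Ready condition of one metrics-server pod seen above.
	kubectl -n kube-system get pod metrics-server-6867b74b74-4v8px \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Prints "False" while the pod is not Ready, matching the pod_ready.go lines.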
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
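The per-component probe that produced the empty results above is a series of `sudo crictl ps -a --quiet --name=<component>` calls; a minimal shell sketch of that loop (component names taken from the log; empty output corresponds to the "0 containers" lines):

    # Sketch: reproduce the per-component container probe seen in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-<no containers>}"
    done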
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
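When no control-plane containers are found, the same diagnostic bundle is gathered on every retry. Collected here as one sketch, with each command copied from the log above (the kubectl path matches the v1.20.0 binary this profile uses):

    # Diagnostic bundle gathered on each retry while the control plane is down.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # connection refused while the apiserver is down
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a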
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
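The four grep/rm pairs above implement a simple stale-kubeconfig check: a file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init is re-run. A sketch of that check, with the file list taken from the log:

    # Stale kubeconfig cleanup as seen above (grep failure => remove the file).
    for conf in admin kubelet controller-manager scheduler; do
      f=/etc/kubernetes/${conf}.conf
      sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"
    done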
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
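The wait that just timed out polls the Ready condition of the metrics-server pod for up to 4m0s. An equivalent manual check is sketched below; the k8s-app=metrics-server label selector is an assumption (the log only shows the pod name):

    # Manual equivalent of the readiness poll that timed out above.
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'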
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
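The two deprecation warnings above come from the kubeadm.yaml still using the v1beta3 API. The migration kubeadm suggests would look like the sketch below; old.yaml/new.yaml are the placeholders from the warning itself, and the binary path is the one used elsewhere in this run:

    # Config migration suggested by the deprecation warning (placeholder file names).
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
        --old-config old.yaml --new-config new.yaml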
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
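
The kubeadm output above ends with the join commands for additional control-plane and worker nodes; the value passed to --discovery-token-ca-cert-hash is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. As a hedged illustration (not kubeadm's or minikube's own code), the following Go sketch computes that digest from an assumed CA path on the control-plane node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed CA location on the control-plane node; adjust as needed.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println("read ca.crt:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse certificate:", err)
		return
	}
	// kubeadm's discovery hash is SHA-256 over the CA's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
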
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
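
The 74318 run above gathers logs with a repeated pattern: "sudo crictl ps -a --quiet --name=<component>" to find container IDs, then "sudo crictl logs --tail 400 <id>" for each hit. A minimal Go sketch of that same pattern, using only the commands visible in the log (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Find container IDs for one component, mirroring the log's
	// "sudo crictl ps -a --quiet --name=kube-apiserver".
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	// Tail each container's logs, mirroring "crictl logs --tail 400 <id>".
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("logs for %s failed: %v\n", id, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s\n", id, logs)
	}
}
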
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
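
After writing the bridge CNI conflist and creating the minikube-rbac clusterrolebinding, process 73629 polls the bundled kubectl for the default ServiceAccount roughly every half second until kube-system privileges are elevated. A small Go retry loop in the same spirit (a sketch only; the kubectl path and kubeconfig location are taken from the log, the loop itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl" // path from the log
	for i := 0; i < 20; i++ {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount exists")
			return
		}
		// The log shows roughly 500ms between successive attempts.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default ServiceAccount")
}
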
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
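
The 74039 run is stuck in kubeadm's kubelet-check: its probe of http://localhost:10248/healthz keeps failing with connection refused. An equivalent standalone probe, for illustration only (kubeadm performs this check internally), could look like:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the kubelet healthz endpoint described in the log above
	// (an illustrative check, not kubeadm's own implementation).
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err) // e.g. "connection refused" as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}
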
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
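
Enabling the addons above boils down to copying the manifests to the node and applying them with the bundled kubectl, exactly as the "kubectl apply -f ..." invocation in the log shows. A hedged Go sketch of that final apply step (paths and manifest names copied from the log; not minikube's source):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the metrics-server manifests the way the log above does,
	// via the node's bundled kubectl and kubeconfig (paths from the log).
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
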
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
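
Both restarted profiles finish with the same version-skew note: kubectl 1.31.0 against cluster 1.31.1 is a minor skew of 0, so no warning is printed. A toy Go version of that comparison (illustrative only; minikube's own check lives in start.go per the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two Kubernetes version strings such as "1.31.0" and "1.31.1".
func minorSkew(clientVer, serverVer string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return -1
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(clientVer) - minor(serverVer)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println("minor skew:", minorSkew("1.31.0", "1.31.1")) // prints: minor skew: 0
}
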
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
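	The healthz probe recorded above is a plain HTTPS GET against the apiserver; a hand-run equivalent (a sketch, assuming the same endpoint and skipping certificate verification of the self-signed serving cert) would be:
	curl -k https://192.168.72.203:8444/healthz
	# expected output on success: ok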
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
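	Note that both pod sweeps above still report "metrics-server-6867b74b74-lxzvw" as Pending with unready containers; an illustrative way to inspect that pod by hand after the run (not part of the recorded output) would be:
	kubectl --context default-k8s-diff-port-754332 -n kube-system get pod metrics-server-6867b74b74-lxzvw
	kubectl --context default-k8s-diff-port-754332 -n kube-system describe pod metrics-server-6867b74b74-lxzvw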
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
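	Because the kubelet never answered on port 10248, kubeadm's wait-control-plane phase timed out; the troubleshooting commands it prints can be run directly on the node (a sketch repeating the commands from the message above):
	systemctl status kubelet
	journalctl -xeu kubelet
	# list any control-plane containers CRI-O actually created
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause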
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
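	[editor's note] The kubeadm output above already names the triage steps for a kubelet that never became healthy. A minimal manual sketch of those same steps, assuming shell access to the affected node (for example via 'minikube ssh -p <profile>'; the profile name is not visible in this excerpt, so it is a placeholder), using only the commands quoted in the log:
	
	    # Check whether the kubelet service is running and why it exited
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	
	    # List any control-plane containers CRI-O managed to start
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # Inspect a failing container's logs (substitute CONTAINERID from the listing above)
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	    # Per the suggestion emitted above, retry the start with an explicit cgroup driver
	    minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	This mirrors, rather than replaces, the advice printed by kubeadm and minikube; whether the cgroup-driver flag actually resolves this failure is not established by the log.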
	
	
	==> CRI-O <==
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.647924227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276614647901875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3535426a-3b5d-4c99-9497-11beffd8e365 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.648309690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76c430e9-1d35-4e91-9f45-c8a357daabca name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.648360854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76c430e9-1d35-4e91-9f45-c8a357daabca name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.648560556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76c430e9-1d35-4e91-9f45-c8a357daabca name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.688993034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d56f6732-a396-48d2-a258-5e945004a2b9 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.689092130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d56f6732-a396-48d2-a258-5e945004a2b9 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.690794348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68dd032b-626e-46d7-9534-bba65368ed0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.691288056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276614691237417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68dd032b-626e-46d7-9534-bba65368ed0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.691918846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f97b30a7-2b3a-46b3-8dab-360b3c4ea0c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.691983427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f97b30a7-2b3a-46b3-8dab-360b3c4ea0c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.692161760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f97b30a7-2b3a-46b3-8dab-360b3c4ea0c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.733659630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1eae798f-62c1-4042-8fe0-357bf5e6ceef name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.733735668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1eae798f-62c1-4042-8fe0-357bf5e6ceef name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.735640665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66716372-249d-4b9e-afa8-9a4080238e9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.736353035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276614736328020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66716372-249d-4b9e-afa8-9a4080238e9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.736828587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d9f6214-efe4-40bf-818c-9871064649aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.736928366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d9f6214-efe4-40bf-818c-9871064649aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.737900817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d9f6214-efe4-40bf-818c-9871064649aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.768636752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f809d58-15f1-4ca7-a445-3cb60bb2b108 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.768743087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f809d58-15f1-4ca7-a445-3cb60bb2b108 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.769774632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f454f05e-3aa7-4028-a5e7-d2d9318073fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.770206323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276614770183531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f454f05e-3aa7-4028-a5e7-d2d9318073fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.770638777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ff8b121-fec3-4625-9a20-e9b5bbe24d84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.770713660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ff8b121-fec3-4625-9a20-e9b5bbe24d84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:54 embed-certs-880490 crio[707]: time="2024-09-14 01:16:54.770983890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ff8b121-fec3-4625-9a20-e9b5bbe24d84 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17df87a7f9d1c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   89f9ec1e9a561       storage-provisioner
	5b58091e03cfa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   2f370938bf8c5       busybox
	107cc9128ebff       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   a2331c57a4dc3       coredns-7c65d6cfc9-ssskq
	b065365cf5210       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   89f9ec1e9a561       storage-provisioner
	f0cf7d5e340de       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   4b00206cb00d6       kube-proxy-566n8
	5fd32fdb3cf8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   dc36e2645d7b9       kube-controller-manager-embed-certs-880490
	9bdf5d4a96c47       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   70abe12994016       kube-scheduler-embed-certs-880490
	dbe67fa760403       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   f20683b3260fd       kube-apiserver-embed-certs-880490
	80a81c3710a32       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   ea8bdc224dbb8       etcd-embed-certs-880490
	
	
	==> coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55581 - 34478 "HINFO IN 3891230327164374211.1707641094132755411. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01890407s
	
	
	==> describe nodes <==
	Name:               embed-certs-880490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=embed-certs-880490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_56_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:56:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880490
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:16:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:14:10 +0000   Sat, 14 Sep 2024 00:56:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:14:10 +0000   Sat, 14 Sep 2024 00:56:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:14:10 +0000   Sat, 14 Sep 2024 00:56:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:14:10 +0000   Sat, 14 Sep 2024 01:03:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.105
	  Hostname:    embed-certs-880490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eed308fb0627444096ccb4fa733de498
	  System UUID:                eed308fb-0627-4440-96cc-b4fa733de498
	  Boot ID:                    5f85edc3-8197-4198-8ad4-bcedfe67fdcb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-ssskq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-embed-certs-880490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-embed-certs-880490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-880490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-566n8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-880490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-6867b74b74-4v8px               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-880490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node embed-certs-880490 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node embed-certs-880490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     20m                kubelet          Node embed-certs-880490 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node embed-certs-880490 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-880490 event: Registered Node embed-certs-880490 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-880490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-880490 event: Registered Node embed-certs-880490 in Controller
	
	
	==> dmesg <==
	[Sep14 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053594] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043097] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep14 01:03] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.900225] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.602607] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.084326] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.062362] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053727] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.184180] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.155415] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.301696] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.110246] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +2.388376] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.062818] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.568256] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.902715] systemd-fstab-generator[1544]: Ignoring "noauto" option for root device
	[  +3.760551] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.158200] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] <==
	{"level":"info","ts":"2024-09-14T01:03:23.314542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d113b8292a777974 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:23.314567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d113b8292a777974 received MsgPreVoteResp from d113b8292a777974 at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:23.314580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d113b8292a777974 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:23.314586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d113b8292a777974 received MsgVoteResp from d113b8292a777974 at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:23.314594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d113b8292a777974 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:23.314614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d113b8292a777974 elected leader d113b8292a777974 at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:23.316422Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d113b8292a777974","local-member-attributes":"{Name:embed-certs-880490 ClientURLs:[https://192.168.50.105:2379]}","request-path":"/0/members/d113b8292a777974/attributes","cluster-id":"2dbe9e3b76acd0e0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T01:03:23.316421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:23.316579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:23.317645Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:23.317928Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:23.318458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.105:2379"}
	{"level":"info","ts":"2024-09-14T01:03:23.318586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:23.318610Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:23.318770Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T01:03:38.178160Z","caller":"traceutil/trace.go:171","msg":"trace[75736895] linearizableReadLoop","detail":"{readStateIndex:654; appliedIndex:653; }","duration":"144.655686ms","start":"2024-09-14T01:03:38.033472Z","end":"2024-09-14T01:03:38.178128Z","steps":["trace[75736895] 'read index received'  (duration: 144.4297ms)","trace[75736895] 'applied index is now lower than readState.Index'  (duration: 225.411µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T01:03:38.178352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.856802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-880490\" ","response":"range_response_count:1 size:5859"}
	{"level":"info","ts":"2024-09-14T01:03:38.178434Z","caller":"traceutil/trace.go:171","msg":"trace[975723025] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-880490; range_end:; response_count:1; response_revision:618; }","duration":"144.971035ms","start":"2024-09-14T01:03:38.033447Z","end":"2024-09-14T01:03:38.178418Z","steps":["trace[975723025] 'agreement among raft nodes before linearized reading'  (duration: 144.824813ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T01:03:38.178535Z","caller":"traceutil/trace.go:171","msg":"trace[225524194] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"470.330195ms","start":"2024-09-14T01:03:37.708192Z","end":"2024-09-14T01:03:38.178522Z","steps":["trace[225524194] 'process raft request'  (duration: 469.799007ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T01:03:38.179149Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:03:37.708176Z","time spent":"470.409338ms","remote":"127.0.0.1:46026","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:501 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2501 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >"}
	{"level":"warn","ts":"2024-09-14T01:03:38.398802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.083855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-880490\" ","response":"range_response_count:1 size:5487"}
	{"level":"info","ts":"2024-09-14T01:03:38.398905Z","caller":"traceutil/trace.go:171","msg":"trace[741603394] range","detail":"{range_begin:/registry/minions/embed-certs-880490; range_end:; response_count:1; response_revision:618; }","duration":"217.217608ms","start":"2024-09-14T01:03:38.181676Z","end":"2024-09-14T01:03:38.398893Z","steps":["trace[741603394] 'range keys from in-memory index tree'  (duration: 216.980027ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T01:13:23.351694Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":863}
	{"level":"info","ts":"2024-09-14T01:13:23.361836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":863,"took":"9.547241ms","hash":2038378596,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2670592,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-14T01:13:23.362014Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2038378596,"revision":863,"compact-revision":-1}
	
	
	==> kernel <==
	 01:16:55 up 13 min,  0 users,  load average: 0.04, 0.05, 0.07
	Linux embed-certs-880490 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] <==
	E0914 01:13:25.734652       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0914 01:13:25.734690       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:13:25.735831       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:13:25.735950       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:14:25.736128       1 handler_proxy.go:99] no RequestInfo found in the context
	W0914 01:14:25.736150       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:14:25.736381       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0914 01:14:25.736394       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:14:25.737546       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:14:25.737697       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:16:25.737912       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:16:25.738233       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 01:16:25.737958       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:16:25.738333       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 01:16:25.739460       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:16:25.739497       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] <==
	E0914 01:11:28.314946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:11:28.788220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:11:58.320482       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:11:58.796363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:12:28.327480       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:12:28.803034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:12:58.334597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:12:58.810558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:13:28.341956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:13:28.820028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:13:58.347767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:13:58.829795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:14:10.386961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-880490"
	I0914 01:14:15.610223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="213.441µs"
	I0914 01:14:26.606701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="226.473µs"
	E0914 01:14:28.354486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:14:28.837324       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:14:58.360648       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:14:58.845072       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:15:28.367602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:15:28.852282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:15:58.373230       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:15:58.860359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:16:28.380737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:16:28.868249       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 01:03:26.330974       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 01:03:26.345943       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.105"]
	E0914 01:03:26.347021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:03:26.423643       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 01:03:26.423682       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 01:03:26.423739       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:03:26.428048       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:03:26.428377       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:03:26.428401       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:26.430394       1 config.go:199] "Starting service config controller"
	I0914 01:03:26.430439       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:03:26.430471       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:03:26.430489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:03:26.433656       1 config.go:328] "Starting node config controller"
	I0914 01:03:26.433732       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:03:26.531053       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 01:03:26.531170       1 shared_informer.go:320] Caches are synced for service config
	I0914 01:03:26.534116       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] <==
	I0914 01:03:22.365568       1 serving.go:386] Generated self-signed cert in-memory
	W0914 01:03:24.676135       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 01:03:24.676184       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 01:03:24.676203       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 01:03:24.676209       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 01:03:24.733021       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 01:03:24.733068       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:24.735216       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 01:03:24.735257       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:03:24.735937       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 01:03:24.736015       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 01:03:24.836057       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 01:15:48 embed-certs-880490 kubelet[915]: E0914 01:15:48.592442     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:15:49 embed-certs-880490 kubelet[915]: E0914 01:15:49.788229     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276549787913303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:15:49 embed-certs-880490 kubelet[915]: E0914 01:15:49.788508     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276549787913303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:15:59 embed-certs-880490 kubelet[915]: E0914 01:15:59.592503     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:15:59 embed-certs-880490 kubelet[915]: E0914 01:15:59.789987     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276559789726772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:15:59 embed-certs-880490 kubelet[915]: E0914 01:15:59.790014     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276559789726772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:09 embed-certs-880490 kubelet[915]: E0914 01:16:09.791622     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276569791268926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:09 embed-certs-880490 kubelet[915]: E0914 01:16:09.792083     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276569791268926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:10 embed-certs-880490 kubelet[915]: E0914 01:16:10.592392     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]: E0914 01:16:19.608482     915 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]: E0914 01:16:19.793893     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276579793462026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:19 embed-certs-880490 kubelet[915]: E0914 01:16:19.794096     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276579793462026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:23 embed-certs-880490 kubelet[915]: E0914 01:16:23.593064     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:16:29 embed-certs-880490 kubelet[915]: E0914 01:16:29.796034     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276589795605300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:29 embed-certs-880490 kubelet[915]: E0914 01:16:29.796067     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276589795605300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:36 embed-certs-880490 kubelet[915]: E0914 01:16:36.593128     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:16:39 embed-certs-880490 kubelet[915]: E0914 01:16:39.798464     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276599798100367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:39 embed-certs-880490 kubelet[915]: E0914 01:16:39.798756     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276599798100367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:47 embed-certs-880490 kubelet[915]: E0914 01:16:47.594111     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:16:49 embed-certs-880490 kubelet[915]: E0914 01:16:49.800993     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276609800457638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:49 embed-certs-880490 kubelet[915]: E0914 01:16:49.801293     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276609800457638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] <==
	I0914 01:03:56.873195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:03:56.883366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:03:56.883428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:04:14.283270       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:04:14.283486       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880490_ddf3e500-7564-4409-b5e4-032b75313db2!
	I0914 01:04:14.283590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7fe8bb4-cb91-41b4-90e8-cd5d59913cd9", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880490_ddf3e500-7564-4409-b5e4-032b75313db2 became leader
	I0914 01:04:14.384591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880490_ddf3e500-7564-4409-b5e4-032b75313db2!
	
	
	==> storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] <==
	I0914 01:03:26.371505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 01:03:56.380497       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880490 -n embed-certs-880490
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-880490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4v8px
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-880490 describe pod metrics-server-6867b74b74-4v8px
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-880490 describe pod metrics-server-6867b74b74-4v8px: exit status 1 (62.002402ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4v8px" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-880490 describe pod metrics-server-6867b74b74-4v8px: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.17s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.32s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 01:08:09.408510   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-057857 -n no-preload-057857
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-14 01:16:57.412547341 +0000 UTC m=+6631.840192536
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-057857 logs -n 25
E0914 01:16:58.155868   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-057857 logs -n 25: (2.152968435s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-617306             | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
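The retry lines above are the kvm2 driver polling libvirt for the guest's DHCP lease, sleeping a randomized and gradually growing interval between attempts. A minimal sketch of that pattern in Go; waitForIP is a hypothetical stand-in for the lease lookup, and the delays are illustrative, not the driver's exact schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP stands in for the libvirt DHCP-lease query; here it simply
// succeeds after a few attempts so the sketch terminates.
func waitForIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.50.105", nil
}

func main() {
	for attempt := 0; ; attempt++ {
		ip, err := waitForIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Randomized delay that tends to grow with each attempt, mirroring
		// the "will retry after ..." messages emitted by retry.go above.
		delay := time.Duration(250+rand.Intn(250*(attempt+1))) * time.Millisecond
		fmt.Printf("attempt %d failed (%v); retrying after %v\n", attempt, err, delay)
		time.Sleep(delay)
	}
}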
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
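The interleaved pgrep lines (pid 74039 belongs to another profile being started in parallel) are the "wait for the kube-apiserver process to appear" loop; process 74318 enters the same loop further down at api_server.go:52. A trimmed sketch of that poll; the pgrep invocation is the one shown verbatim in the log, the half-second cadence matches the timestamps, and running it needs the same sudo access the test VM has:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for {
		// pgrep -x -n -f: newest process whose full command line matches
		// the pattern, exactly as in the log lines above.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("kube-apiserver is running, pid", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}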
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
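Each of the three command groups above installs one CA into the guest's OpenSSL trust store: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and link <hash>.0 in /etc/ssl/certs back to it. A rough local equivalent of one iteration in Go; the paths are illustrative and the real run executes these commands over SSH inside the VM with root privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links pemPath into certsDir under its OpenSSL subject hash,
// the same "openssl x509 -hash -noout" + "ln -fs" sequence shown above.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}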
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
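Every "-checkend 86400" call above asks openssl whether the given certificate expires within the next 24 hours. The same check written against Go's crypto/x509, as a stand-alone sketch (the certificate path comes from the command line; this is not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: openssl x509 -noout -checkend 86400
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}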
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
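The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; anything missing or pointing elsewhere is removed so the kubeadm phases below can regenerate it. A compact sketch of that rule (a local illustration of the decision, not kubeadm.go itself):

package main

import (
	"bytes"
	"fmt"
	"os"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(controlPlaneURL)) {
			// Missing or pointing elsewhere: drop it and let
			// "kubeadm init phase kubeconfig" write a fresh copy.
			os.Remove(f)
			fmt.Println("removed", f)
			continue
		}
		fmt.Println("kept", f)
	}
}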
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
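Rather than running a full "kubeadm init", the restart path replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml in the order logged above: certs, kubeconfig, kubelet-start, control-plane, etcd. The same sequence driven from Go; it assumes the version-pinned kubeadm binary from /var/lib/minikube/binaries is on PATH and that the config file exists:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}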
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
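The 403 -> 500 -> 200 progression above is the apiserver coming up: anonymous access to /healthz is forbidden until the RBAC bootstrap roles exist, then individual poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) still report failure, and finally the endpoint returns "ok". A minimal Go poller of that endpoint; certificate verification is skipped here for brevity, which is an assumption of the sketch rather than what minikube does (it trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.105:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}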
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
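The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the "kvm2 driver + crio runtime" combination recommends. The log does not reproduce its contents; the sketch below writes a conflist of the general shape such a bridge configuration takes, and the subnet, bridge name, versions and plugin list are assumptions, not minikube's literal file:

package main

import (
	"fmt"
	"os"
)

// A typical bridge + portmap conflist; all field values here are illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing under /etc/cni requires root, as it does in the VM.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}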
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
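Every pod above is skipped rather than waited on because the node itself is not yet Ready, so the "extra waiting" pass finishes in about a second. The condition being consulted for each pod is its Ready status; a stand-alone way to read it via kubectl's jsonpath output, using the context and pod name from the log (an illustration of the check, not pod_ready.go itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func podReady(context, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := podReady("embed-certs-880490", "kube-system", "coredns-7c65d6cfc9-ssskq")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("Ready:", ready)
}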
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
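	The kubeadm YAML dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what the scp step just logged writes to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal, illustrative way to inspect that file by hand, assuming SSH access through the profile name shown in the log and that the bundled v1.31 kubeadm binary supports `config validate`:
	
	  # hypothetical manual check, not part of the recorded test run
	  minikube ssh -p default-k8s-diff-port-754332
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new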
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
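	The healthz progression above is expected during a control-plane restart: the early 403s come back while the probe is still anonymous and the rbac/bootstrap-roles post-start hook has not yet recreated the roles that permit unauthenticated access to /healthz, the 500s list the post-start hooks that are still failing, and the final 200 ("ok") is what minikube waits for. A sketch of the same probe done by hand, assuming the endpoint and CA path already shown in this log:
	
	  # illustrative only; drop --cacert and use -k if the CA file is not at hand
	  curl --cacert /var/lib/minikube/certs/ca.crt "https://192.168.72.203:8444/healthz?verbose"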
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
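	The pod_ready waits above are minikube's internal polling (skipped here because the node itself is not yet Ready). A roughly equivalent manual check with kubectl, assuming the kubeconfig context carries the profile name as minikube normally sets it:
	
	  # hypothetical equivalent of the system-pod wait, not part of the recorded test run
	  kubectl --context default-k8s-diff-port-754332 -n kube-system get pods -o wide
	  kubectl --context default-k8s-diff-port-754332 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m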
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
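	(The kubelet check kubeadm keeps retrying above is a plain HTTP GET against the kubelet's local healthz endpoint; run by hand it is exactly the command quoted in the log, and a healthy kubelet answers "ok":)
		# The probe kubeadm performs while waiting for the kubelet to come up.
		curl -sSL http://localhost:10248/healthz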
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
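	(The healthz probe at 01:08:07 is an HTTPS GET against the apiserver on the non-default port 8444. An equivalent manual check from inside the VM might look like the following; the CA path is an assumption based on the usual /var/lib/minikube layout, and -k could be substituted to skip verification:)
		# Probe the same endpoint minikube checked; the expected response body is "ok".
		curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.203:8444/healthz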
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
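	(The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when that endpoint is absent; in this run all four files are simply missing. Condensed into one illustrative loop, with the endpoint and file names taken from the log:)
		# Remove kubeconfig files that do not reference the expected control-plane endpoint.
		endpoint="https://control-plane.minikube.internal:8443"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done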
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
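	(kubeadm's troubleshooting hint above can be combined into a single pass that lists the kube-* containers cri-o knows about and dumps recent logs for each. A sketch built only from the commands quoted in the log; the --tail value is arbitrary:)
		# List kube-* containers (minus pause) and print their recent logs, per kubeadm's suggestion.
		sock=/var/run/crio/crio.sock
		for id in $(sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause | awk '{print $1}'); do
		  echo "=== $id ==="
		  sudo crictl --runtime-endpoint "$sock" logs --tail 50 "$id"
		done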
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
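	A minimal troubleshooting sketch based on the suggestions in the error output above, run from inside the node (e.g. via `minikube ssh`). It assumes the CRI-O socket path quoted in the kubeadm hint; the profile name used for the retry is taken from the node name in the CRI-O log below, and any additional start flags the test harness normally passes are omitted here:

	    # Inspect the kubelet, which never answered on localhost:10248
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 100

	    # List control-plane containers via CRI-O (socket path from the kubeadm hint)
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # ...then inspect a failing container's logs (replace CONTAINERID with an ID from the listing)
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	    # Retry with the cgroup-driver suggestion from the log (run on the host, not inside the VM)
	    minikube start -p no-preload-057857 --extra-config=kubelet.cgroup-driver=systemd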
	
	
	==> CRI-O <==
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.934569988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276618934542520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ea5a2d4-4ff5-4cbf-baaf-efa2fab9745b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.935136467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0739dcb-6a30-4814-a9f6-4d4029c96fd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.935255000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0739dcb-6a30-4814-a9f6-4d4029c96fd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.935487571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0739dcb-6a30-4814-a9f6-4d4029c96fd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.972251284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9fea61e-4318-4410-b07b-88c13d3a4e4c name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.972327697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9fea61e-4318-4410-b07b-88c13d3a4e4c name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.973633123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1674dd34-41e1-4a41-9963-48ecaf20467b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.974187655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276618974135888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1674dd34-41e1-4a41-9963-48ecaf20467b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.974964238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca39b365-2412-439f-a7fc-16bac11e198d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.975061102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca39b365-2412-439f-a7fc-16bac11e198d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:58 no-preload-057857 crio[707]: time="2024-09-14 01:16:58.975420957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca39b365-2412-439f-a7fc-16bac11e198d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.018998550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96f05d81-8bbc-4e5a-9283-ebe53adf5674 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.019103388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96f05d81-8bbc-4e5a-9283-ebe53adf5674 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.020456532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e92adafe-5a1d-4ce6-8bc6-cd4fa3a03002 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.020864168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276619020784558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e92adafe-5a1d-4ce6-8bc6-cd4fa3a03002 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.021438620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eac39a00-1453-461e-a51b-d4ee2acbce97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.021500408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eac39a00-1453-461e-a51b-d4ee2acbce97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.021707220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eac39a00-1453-461e-a51b-d4ee2acbce97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.058091601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36fee0c3-f451-4315-a339-44103fcbc54f name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.058224857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36fee0c3-f451-4315-a339-44103fcbc54f name=/runtime.v1.RuntimeService/Version
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.059488445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=201f16e3-907f-4f11-8d81-ca1bceb37872 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.059943900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276619059899377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=201f16e3-907f-4f11-8d81-ca1bceb37872 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.060502863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51fc3b60-9ec2-497a-acf1-9986c6314061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.060566267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51fc3b60-9ec2-497a-acf1-9986c6314061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:16:59 no-preload-057857 crio[707]: time="2024-09-14 01:16:59.060802389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51fc3b60-9ec2-497a-acf1-9986c6314061 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d9d3a688e481       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   fb24127a7326e       coredns-7c65d6cfc9-jqk6k
	8a8da47be06ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   1cdcee7c292eb       coredns-7c65d6cfc9-52vdb
	8daf98a703f89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   8a439c2545ee7       storage-provisioner
	dd7bb23d93588       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   d865833a3ef13       kube-proxy-m6d75
	6e6a8583ab886       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   b5cd553992fd1       kube-scheduler-no-preload-057857
	51a277db64b96       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   25514cd4c9cec       kube-apiserver-no-preload-057857
	5267b5229d2c0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   ac4823a25b964       etcd-no-preload-057857
	1b84acc249655       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   7b7b7b9b2be17       kube-controller-manager-no-preload-057857
	5ed647f42f39c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   52f5718e13ae1       kube-apiserver-no-preload-057857
	
	
	==> coredns [7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-057857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-057857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=no-preload-057857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 01:07:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-057857
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:16:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:13:00 +0000   Sat, 14 Sep 2024 01:07:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:13:00 +0000   Sat, 14 Sep 2024 01:07:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:13:00 +0000   Sat, 14 Sep 2024 01:07:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:13:00 +0000   Sat, 14 Sep 2024 01:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    no-preload-057857
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef9e0c97a2104446af328f45caad6a6f
	  System UUID:                ef9e0c97-a210-4446-af32-8f45caad6a6f
	  Boot ID:                    914bc9f4-9209-4c8f-8750-74d7cb6ca8e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-52vdb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-jqk6k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-no-preload-057857                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-no-preload-057857             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-no-preload-057857    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-m6d75                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-no-preload-057857             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-d78nt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m15s (x2 over 9m15s)  kubelet          Node no-preload-057857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s (x2 over 9m15s)  kubelet          Node no-preload-057857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x2 over 9m15s)  kubelet          Node no-preload-057857 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m11s                  node-controller  Node no-preload-057857 event: Registered Node no-preload-057857 in Controller
	
	
	==> dmesg <==
	[  +0.050817] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769878] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.913723] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.532564] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.693656] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.066900] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058408] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.195235] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.128298] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.295720] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[ +15.452579] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.059300] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.967947] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +5.424083] kauditd_printk_skb: 97 callbacks suppressed
	[Sep14 01:03] kauditd_printk_skb: 86 callbacks suppressed
	[Sep14 01:07] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.897177] systemd-fstab-generator[3005]: Ignoring "noauto" option for root device
	[  +4.692564] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.893055] systemd-fstab-generator[3330]: Ignoring "noauto" option for root device
	[  +4.911694] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +0.134125] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.403521] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e] <==
	{"level":"info","ts":"2024-09-14T01:07:39.348448Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T01:07:39.348981Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T01:07:39.350608Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-09-14T01:07:39.350994Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T01:07:39.351019Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-09-14T01:07:39.564953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T01:07:39.565092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T01:07:39.565129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 1"}
	{"level":"info","ts":"2024-09-14T01:07:39.565236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.565307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.565365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.565401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.570165Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.575027Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:no-preload-057857 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T01:07:39.575076Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:07:39.575157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:07:39.580752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:07:39.596622Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T01:07:39.596662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T01:07:39.586309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.599929Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.591244Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:07:39.600773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T01:07:39.600927Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.606069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.129:2379"}
	
	
	==> kernel <==
	 01:16:59 up 14 min,  0 users,  load average: 0.40, 0.38, 0.18
	Linux no-preload-057857 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4] <==
	W0914 01:12:42.493680       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:12:42.493986       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:12:42.496084       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:12:42.496206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:13:42.496861       1 handler_proxy.go:99] no RequestInfo found in the context
	W0914 01:13:42.496943       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:13:42.497011       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0914 01:13:42.497027       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:13:42.498149       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:13:42.498188       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:15:42.498691       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:15:42.498896       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 01:15:42.498972       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:15:42.499083       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:15:42.500263       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:15:42.500333       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417] <==
	W0914 01:07:31.206059       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.209467       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.212972       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.266408       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.307778       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.430114       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.441918       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.534297       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.534298       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.562493       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.571114       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.573590       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.584032       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.594230       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.595473       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.598862       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.645381       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.724543       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.783121       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.790685       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.843139       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.862096       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.884109       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:32.034517       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:32.201599       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489] <==
	E0914 01:11:48.351692       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:11:48.885635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:12:18.358377       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:12:18.893559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:12:48.365943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:12:48.901681       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:13:00.216451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-057857"
	E0914 01:13:18.373017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:13:18.909700       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:13:48.379607       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:13:48.917014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:13:55.406858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="207.058µs"
	I0914 01:14:08.405652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="143.554µs"
	E0914 01:14:18.386197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:14:18.924915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:14:48.398727       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:14:48.934254       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:15:18.404150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:15:18.943768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:15:48.411786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:15:48.952338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:16:18.419687       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:16:18.962006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:16:48.428387       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:16:48.970300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 01:07:50.123393       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 01:07:50.142609       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	E0914 01:07:50.142698       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:07:50.206277       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 01:07:50.206321       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 01:07:50.206349       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:07:50.208888       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:07:50.209179       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:07:50.209207       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:07:50.210449       1 config.go:199] "Starting service config controller"
	I0914 01:07:50.210488       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:07:50.210512       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:07:50.210528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:07:50.211113       1 config.go:328] "Starting node config controller"
	I0914 01:07:50.211137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:07:50.311743       1 shared_informer.go:320] Caches are synced for service config
	I0914 01:07:50.311803       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 01:07:50.313659       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136] <==
	W0914 01:07:42.392720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 01:07:42.393336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.394488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 01:07:42.395872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.409148       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 01:07:42.409675       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 01:07:42.478104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 01:07:42.478249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.495402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 01:07:42.495651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.530095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 01:07:42.530215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.554227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.554355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.593126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.593191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.651438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.651659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.709346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 01:07:42.709415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.791901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 01:07:42.791951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.792797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.792865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0914 01:07:44.414058       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 01:15:47 no-preload-057857 kubelet[3337]: E0914 01:15:47.390091    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:15:54 no-preload-057857 kubelet[3337]: E0914 01:15:54.558495    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276554558159492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:15:54 no-preload-057857 kubelet[3337]: E0914 01:15:54.558874    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276554558159492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:02 no-preload-057857 kubelet[3337]: E0914 01:16:02.393654    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:16:04 no-preload-057857 kubelet[3337]: E0914 01:16:04.560509    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276564559969111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:04 no-preload-057857 kubelet[3337]: E0914 01:16:04.560911    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276564559969111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:13 no-preload-057857 kubelet[3337]: E0914 01:16:13.389566    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:16:14 no-preload-057857 kubelet[3337]: E0914 01:16:14.563525    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276574563181808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:14 no-preload-057857 kubelet[3337]: E0914 01:16:14.563888    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276574563181808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:24 no-preload-057857 kubelet[3337]: E0914 01:16:24.566108    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276584565678545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:24 no-preload-057857 kubelet[3337]: E0914 01:16:24.566142    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276584565678545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:28 no-preload-057857 kubelet[3337]: E0914 01:16:28.390316    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:16:34 no-preload-057857 kubelet[3337]: E0914 01:16:34.569617    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276594569086359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:34 no-preload-057857 kubelet[3337]: E0914 01:16:34.569669    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276594569086359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:41 no-preload-057857 kubelet[3337]: E0914 01:16:41.390976    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]: E0914 01:16:44.404255    3337 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]: E0914 01:16:44.571779    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276604571383397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:44 no-preload-057857 kubelet[3337]: E0914 01:16:44.571832    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276604571383397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:54 no-preload-057857 kubelet[3337]: E0914 01:16:54.573505    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276614573025698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:54 no-preload-057857 kubelet[3337]: E0914 01:16:54.574144    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276614573025698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:55 no-preload-057857 kubelet[3337]: E0914 01:16:55.391095    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	
	
	==> storage-provisioner [8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce] <==
	I0914 01:07:50.705551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:07:50.715738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:07:50.715798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:07:50.732764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:07:50.733417       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b568b778-0489-476e-97d6-3d355719ba43", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-057857_dfab0f72-8b30-4b2b-ae5d-6c1c1adc97fc became leader
	I0914 01:07:50.735992       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-057857_dfab0f72-8b30-4b2b-ae5d-6c1c1adc97fc!
	I0914 01:07:50.836835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-057857_dfab0f72-8b30-4b2b-ae5d-6c1c1adc97fc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-057857 -n no-preload-057857
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-057857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-d78nt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-057857 describe pod metrics-server-6867b74b74-d78nt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-057857 describe pod metrics-server-6867b74b74-d78nt: exit status 1 (71.757204ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-d78nt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-057857 describe pod metrics-server-6867b74b74-d78nt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 01:08:27.787494   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:09:31.535554   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:09:37.445970   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:10:11.672578   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-14 01:17:11.61436895 +0000 UTC m=+6646.042014145
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-754332 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-754332 logs -n 25: (2.017730452s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-617306             | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
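The /etc/hosts update above is an idempotent rewrite: any existing line for control-plane.minikube.internal is filtered out, a fresh "IP<TAB>hostname" mapping is appended, and the result is copied back over the original file. A minimal sketch of the same rewrite in Go, for illustration only (the throwaway output path is an assumption; the real command needs root and writes /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "<TAB>hostname" and appends
// a fresh "ip<TAB>hostname" mapping, the same idempotent rewrite seen in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // remove the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// A copy of the hosts file is used here so the sketch can run unprivileged.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.39.129", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}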
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
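Each of the `openssl x509 -checkend 86400` runs above asks whether the given certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. The same check can be expressed with Go's standard library; this is a sketch for illustration, and the file path is just one of the certificates named in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}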
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
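The healthz wait that begins here simply polls the apiserver endpoint until it answers 200 OK or a deadline passes. A minimal sketch of such a polling loop, for illustration only (the URL comes from the log's context; the timeout and the skipped TLS verification are assumptions made to keep the sketch self-contained, not how minikube verifies the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-signed cert, so verification is skipped
		// here only to keep the sketch short; real code would pin the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.129:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}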
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
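The provision step above issues a server certificate whose SAN list covers the loopback address, the VM IP, and the machine names, signed by the stored CA. A compact sketch of issuing a SAN-bearing certificate with crypto/x509, for illustration only (the throwaway in-memory CA stands in for ca.pem/ca-key.pem, and the SAN values are copied from the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the stored ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-431084"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-431084"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert (%d DER bytes) with SANs %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}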
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
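The clock check above runs `date +%s.%N` in the guest and compares the result with the host's wall clock, accepting the machine when the delta stays within tolerance. A small sketch of parsing that output and computing the delta, for illustration only (the sample value is the one captured in the log; the "2s" tolerance in the comment is an assumption):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpochNano turns "seconds.nanoseconds" (the output of `date +%s.%N`) into a time.Time.
func parseEpochNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpochNano("1726275772.293932338") // value captured in the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (would be compared against a tolerance, e.g. 2s)\n", delta)
}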
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
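Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of doing a full `kubeadm init`. A sketch of that sequence, with the config path and v1.20.0 binaries directory taken from the log; the loop itself is illustrative, not minikube's actual code:

```go
// Illustrative sketch: re-run the kubeadm init phases seen in the log, in order,
// against the existing kubeadm.yaml.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const kubeadmYAML = "/var/tmp/minikube/kubeadm.yaml"
	const binDir = "/var/lib/minikube/binaries/v1.20.0"

	// The init phases re-run on restart, in the order seen in the log.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := []string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"), "kubeadm", "init", "phase"}
		args = append(args, phase...)
		args = append(args, "--config", kubeadmYAML)
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm init phase %v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}
```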
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
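The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a simple poll: retry roughly every 500ms until the apiserver process appears or a deadline passes. A minimal sketch of such a loop (the two-minute timeout is an assumption for the example, not a value from the log):

```go
// Minimal sketch of the "wait for apiserver process" poll seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```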
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
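The clock check above runs `date +%s.%N` on the guest and compares it against the host's wall clock, accepting the ~97.6ms difference as within tolerance. A small sketch of that comparison, using the raw value from the log; the tolerance threshold here is an assumption, since the log only reports the verdict:

```go
// Sketch: parse the guest's `date +%s.%N` output and compare it to the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Sample `date +%s.%N` output from the log; on a real run this comes back over SSH.
	raw := "1726275791.643970925"
	parts := strings.SplitN(strings.TrimSpace(raw), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	hostNow := time.Now() // captured immediately after reading the guest clock
	delta := hostNow.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// The 2s tolerance is an assumed value for this example.
	if delta < 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large; the guest clock would be reset\n", delta)
	}
}
```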
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
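The `sed` one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls so unprivileged containers can bind low ports. A minimal Go sketch of the same kind of in-place edit for the two most important keys (illustrative only, not minikube's code; it needs write access to the file):

```go
// Sketch: rewrite the pause image and cgroup manager in CRI-O's drop-in config,
// equivalent to the sed commands in the log.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	text := string(data)
	// Pin the pause image CRI-O uses for pod sandboxes.
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Match the kubelet's cgroup driver (cgroupfs in this configuration).
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(text), 0o644); err != nil {
		panic(err)
	}
}
```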
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
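The preload step above copies a ~389MB lz4-compressed tarball of container images into the VM and unpacks it under /var, so CRI-O starts with the v1.31.1 images already present instead of pulling them. A sketch of the extraction command as run on the guest (flags and paths taken from the log):

```go
// Sketch: extract the preloaded image tarball the same way the log does,
// preserving xattrs such as security.capability.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4 on the fly
		"-C", "/var", // container images and state live under /var
		"-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
	}
}
```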
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
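
The /etc/hosts one-liner two lines above drops any existing entry for control-plane.minikube.internal and appends a fresh "ip<TAB>hostname" line so the control-plane name resolves to the node IP. A small sketch of the same rewrite in Go (writes to a ".new" path because editing /etc/hosts itself needs root; paths and IP are taken from the log, the helper name is made up):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // rewriteHosts removes any line ending in "<TAB>host" and appends a new
    // "ip<TAB>host" entry, mirroring the grep -v / echo / cp one-liner above.
    func rewriteHosts(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // old control-plane entry, replaced below
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(rewriteHosts("/etc/hosts", "192.168.50.105", "control-plane.minikube.internal"))
    }
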
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
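
The repeated "will retry after ...: waiting for machine to come up" lines are a poll loop: libvirt is asked for the domain's DHCP lease, and each miss is followed by a slightly longer, jittered sleep until an IP appears or the start timeout fires. A generic sketch of that retry shape (not minikube's retry package; the stub lookup and IP below are placeholders):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup() until it returns an address or the timeout
    // elapses, sleeping longer (with jitter) after each failed attempt,
    // like the "will retry after ..." lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 3*time.Second {
                backoff *= 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            if time.Since(start) > 2*time.Second {
                return "192.168.39.10", nil // placeholder lease
            }
            return "", errors.New("no lease yet")
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
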
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
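
The block of identical "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above is the wait-for-apiserver-process phase: the same pgrep is issued roughly every half second until it returns a PID or the start timeout expires. A minimal sketch of that loop (illustrative; the 500ms interval matches the timestamps in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until the kube-apiserver process
    // shows up, checking twice per second like the repeated runs above.
    func waitForAPIServerProcess(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                return string(out), nil // pgrep prints the newest matching PID
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        pid, err := waitForAPIServerProcess(2 * time.Minute)
        fmt.Println(pid, err)
    }
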
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
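
Two different openssl invocations appear in the certificate block above: `openssl x509 -hash -noout` prints the subject-name hash that becomes the symlink name under /etc/ssl/certs (e.g. b5213941.0, 51391683.0, 3ec20f2e.0), and `-checkend 86400` exits non-zero if the certificate would expire within the next 24 hours, which is how the remaining validity of the apiserver/etcd/front-proxy certs is verified. A small wrapper around the same commands, as a sketch (paths taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash returns the OpenSSL subject-name hash of a PEM certificate;
    // "<hash>.0" is the symlink name expected under /etc/ssl/certs.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    // validFor24h reports whether the certificate is still valid a day from now;
    // openssl's -checkend makes the command exit non-zero if it is not.
    func validFor24h(certPath string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
    }

    func main() {
        h, _ := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Printf("expected symlink: /etc/ssl/certs/%s.0\n", h)
        fmt.Println("apiserver-kubelet-client cert valid >24h:",
            validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
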
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
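
The healthz progression above is expected after a control-plane restart: first 403 because the anonymous probe user cannot read /healthz until RBAC bootstrap roles exist, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still running, and finally 200 "ok". The wait loop just keeps polling until it sees 200. A minimal polling sketch (TLS verification is skipped here only because this illustration does not load minikube's CA bundle):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it answers
    // 200 or the timeout expires; 403 and 500 during bootstrap are treated as
    // "not ready yet", exactly like the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // sketch only: skip verification instead of trusting minikube's CA
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.50.105:8443/healthz", 4*time.Minute))
    }
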
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
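
The pod_ready messages above come from checking each system-critical pod's Ready condition; while the node itself still reports Ready=False, each pod check is skipped rather than failed, which is why every entry ends in "(skipping!)". A hedged client-go sketch of the per-pod condition check (the kubeconfig path and pod name below are placeholders, not the harness's actual values):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady fetches a pod and reports whether its Ready condition is True,
    // the same condition the pod_ready wait loop above keys on.
    func podIsReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // placeholder kubeconfig path; the test harness uses its own profile kubeconfig
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := podIsReady(cs, "kube-system", "coredns-7c65d6cfc9-ssskq")
        fmt.Println(ready, err)
    }
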
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
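For reference, the preload check logged at crio.go:510/514 above reduces to listing images through crictl and testing whether the expected control-plane image is already present. The Go sketch below illustrates that kind of check; the crictl JSON field names ("images", "repoTags") and the target tag are assumptions drawn from this log, not minikube's actual cache_images code.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages mirrors only the parts of `crictl images --output json` that
    // this check needs (the field names are an assumption about crictl's output).
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already has the given image tag.
    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        // Same image the log looks for before deciding whether to copy the preload tarball.
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
        fmt.Println(ok, err)
    }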
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
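The repeated `openssl x509 -noout -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. A self-contained Go equivalent using crypto/x509 is sketched below; the file path comes from the log, and the rest is purely illustrative.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Path taken from the log; 24h matches the -checkend 86400 used above.
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }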
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
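The grep/rm sequence above checks each leftover kubeconfig for the expected control-plane endpoint and removes the file when the endpoint is missing, before the fresh kubeadm.yaml is copied into place. A minimal Go sketch of that idea follows; the helper name and the simplification to a plain substring check are illustrative, not minikube's kubeadm.go.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeIfStale deletes a kubeconfig-style file when it does not mention the
    // expected control-plane endpoint; missing files are left alone, matching the
    // "No such file or directory" cases in the log.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            if os.IsNotExist(err) {
                return nil
            }
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil
        }
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        for _, p := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            fmt.Println(p, removeIfStale(p, endpoint))
        }
    }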
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
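The healthz wait above polls https://192.168.72.203:8444/healthz until the apiserver stops answering 403/500 and returns 200. The Go sketch below shows an equivalent polling loop; it skips TLS verification purely to stay self-contained, whereas the real check trusts the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify keeps the sketch self-contained; the real client
            // would verify against the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy at %s after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.72.203:8444/healthz", time.Minute))
    }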
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
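	[editor's note] The pod_ready entries above poll each system-critical pod's Ready condition with a per-pod timeout and skip pods whose node is still NotReady. A minimal client-go sketch of that kind of readiness poll follows; it is not minikube's actual pod_ready.go, and the kubeconfig path, poll interval, and the hard-coded pod name are placeholders taken loosely from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the report updates a Jenkins workspace kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget printed in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-f9qhk", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval; minikube's actual cadence may differ
	}
	fmt.Println("timed out waiting for pod to be Ready")
}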
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
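	[editor's note] The sshutil lines show the runner opening key-based SSH sessions to the guest (192.168.72.203, user docker, the profile's id_rsa key) and then copying addon manifests and running commands over them. A minimal sketch of such a runner using golang.org/x/crypto/ssh is shown below; it is not minikube's sshutil/ssh_runner implementation, and only the host, key path, and the first command are copied from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address are the ones printed in the log above.
	keyPath := "/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.72.203:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same command the runner executes right after the SSH clients are created.
	out, err := session.CombinedOutput("sudo systemctl start kubelet")
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}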
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
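	[editor's note] The addon phase above copies the storage-provisioner, storageclass, and metrics-server manifests to /etc/kubernetes/addons/ on the guest and applies them with the bundled kubectl, then verifies metrics-server. The sketch below replays the same metrics-server apply from Go; the command string, binary path, and manifest paths are copied verbatim from the log, but minikube actually runs this over SSH inside the guest rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exact command from the log, run through bash the way ssh_runner does.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply"+
			" -f /etc/kubernetes/addons/metrics-apiservice.yaml"+
			" -f /etc/kubernetes/addons/metrics-server-deployment.yaml"+
			" -f /etc/kubernetes/addons/metrics-server-rbac.yaml"+
			" -f /etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}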
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
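	[editor's note] Process 74039 (the old-k8s-version v1.20.0 profile) finds no running kube-apiserver via pgrep, so it falls back to collecting diagnostics: a crictl listing per control-plane component (all empty), journalctl for kubelet and CRI-O, dmesg, and a `kubectl describe nodes` that fails with "connection refused" on localhost:8443 because no apiserver container exists yet. The sketch below runs the same collection commands from Go; the commands are copied verbatim from the log and require root on the node, and this is only an illustration of the cycle, not minikube's logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one diagnostic command through bash, as the ssh_runner lines above do,
// and prints its combined output. Failures are reported but do not stop the collection.
func run(name, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Printf("=== %s ===\n%s", name, out)
	if err != nil {
		fmt.Printf("(%s failed: %v)\n", name, err)
	}
}

func main() {
	// Per-component container listings; an empty result matches the `found id: ""` lines above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		run(c, "sudo crictl ps -a --quiet --name="+c)
	}
	run("kubelet", "sudo journalctl -u kubelet -n 400")
	run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run("CRI-O", "sudo journalctl -u crio -n 400")
	run("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}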
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
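	(Editor's note, hedged: the cycle above repeats for the rest of this log — every control-plane container query returns an empty ID list and "describe nodes" is refused on localhost:8443, which is consistent with the kube-apiserver container never being created on this node. A minimal sketch of reproducing the same checks by hand inside the guest, e.g. via `minikube ssh -p <profile>` with a placeholder profile name, using only the commands minikube itself runs here:

	    # manual re-run of the checks from the log above (sketch, not part of the test output)
	    sudo crictl ps -a --quiet --name=kube-apiserver    # empty output = no apiserver container exists
	    sudo crictl ps -a --quiet --name=etcd              # same check for etcd
	    sudo journalctl -u kubelet -n 400 --no-pager       # kubelet side: why static pods are not starting
	    sudo journalctl -u crio -n 400 --no-pager          # CRI-O side of the same window
	)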
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
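	(Editor's note, hedged: the interleaved pod_ready.go lines come from three other test profiles (pids 74318, 73629, 73455) polling their metrics-server pods, which stay not-Ready throughout this window. Outside the harness, roughly the same condition could be watched with kubectl; the label selector `k8s-app=metrics-server` and the `<profile>` context are assumptions, not values taken from this log:

	    # sketch of the readiness check the test loop is polling
	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context <profile> -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=5m
	)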
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
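The bridge CNI step announced above ends up writing a small conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes per the scp line further down); the log does not reproduce its contents. As a rough sketch only, a bridge conflist of that general shape could be written by hand like this (the subnet and plugin options here are assumptions for illustration, not the file minikube actually generates):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF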
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
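The healthz wait logged above amounts to polling the apiserver's /healthz endpoint until it answers 200 with "ok". A rough shell equivalent, run against the address shown in the log, would be (the retry loop itself is illustrative, not minikube's implementation):

	# poll until the apiserver reports healthy; -k skips TLS verification for the self-signed cert
	until curl -fsk https://192.168.50.105:8443/healthz | grep -q ok; do
	  sleep 2
	done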
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
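Once the "Done!" line above is reached, the default kubeconfig has been updated, so the cluster can be inspected directly; for example (assuming the default kubeconfig location, with the context name taken from the profile named in that line):

	kubectl config current-context                       # expected: embed-certs-880490
	kubectl --context embed-certs-880490 get pods -A     # list pods across all namespaces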
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
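	The failure above is the generic kubeadm wait-control-plane timeout: the kubelet never answered its health probe on 127.0.0.1:10248, so no control-plane containers were created (the crictl listings earlier in this log all found 0 containers). A minimal sketch of the manual checks the output itself suggests, assuming a shell on the affected guest (e.g. via 'minikube ssh'; a '-p <profile>' flag would be added as needed and is left as a placeholder here):
	
	    systemctl status kubelet                  # is the kubelet service active at all?
	    journalctl -xeu kubelet | tail -n 100     # recent kubelet errors, e.g. a cgroup-driver mismatch
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # per the suggestion logged above, retry with an explicit cgroup driver:
	    minikube start --extra-config=kubelet.cgroup-driver=systemd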
	
	
	==> CRI-O <==
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.087735362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d1aa63b-3696-4498-9e60-ec9ecbbc7d89 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.087919264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d1aa63b-3696-4498-9e60-ec9ecbbc7d89 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.123364954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a62af96-1a58-4dd4-b4ab-01c84b0c800b name=/runtime.v1.RuntimeService/Version
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.123516095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a62af96-1a58-4dd4-b4ab-01c84b0c800b name=/runtime.v1.RuntimeService/Version
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.124554041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2535046d-5481-401b-9555-68b6444d42e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.124935761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276633124914652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2535046d-5481-401b-9555-68b6444d42e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.125443074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e517010-6ee2-4f86-9b49-74a57bda6d8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.125510844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e517010-6ee2-4f86-9b49-74a57bda6d8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.125694884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e517010-6ee2-4f86-9b49-74a57bda6d8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.162348314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2753ffc3-64ba-4a85-9319-111a311847da name=/runtime.v1.RuntimeService/Version
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.162493000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2753ffc3-64ba-4a85-9319-111a311847da name=/runtime.v1.RuntimeService/Version
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.163869393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28830647-9d96-4743-bfbc-02b8e371e36c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.164310103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276633164289828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28830647-9d96-4743-bfbc-02b8e371e36c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.164826237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f19a4f4-b88e-46db-bb7f-9f4bc684861a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.165000780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f19a4f4-b88e-46db-bb7f-9f4bc684861a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.165939936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f19a4f4-b88e-46db-bb7f-9f4bc684861a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.193351418Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=fa5bb65b-1db6-4627-9d72-d76c6cf51fc8 name=/runtime.v1.RuntimeService/Status
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.193471151Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=fa5bb65b-1db6-4627-9d72-d76c6cf51fc8 name=/runtime.v1.RuntimeService/Status
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.199656547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f2400cb-d9f5-43d8-b089-e9ecc25b31c8 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.199733494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f2400cb-d9f5-43d8-b089-e9ecc25b31c8 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.200622555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c413279d-9365-4908-b892-c4f30257156d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.201013488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276633200991009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c413279d-9365-4908-b892-c4f30257156d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.201515966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe5831df-8711-4a11-9d59-d5101307195e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.201584687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe5831df-8711-4a11-9d59-d5101307195e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:17:13 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:17:13.201775438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe5831df-8711-4a11-9d59-d5101307195e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd70c0b225453       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   367349f125649       storage-provisioner
	d223f20a7b200       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   dbae3367a6ac4       busybox
	eed5d3016c514       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   49efd604c9284       coredns-7c65d6cfc9-5lgsh
	a208a2f3609d0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   bc1ec75378bb3       kube-proxy-f9qhk
	6342974eea142       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   367349f125649       storage-provisioner
	6234a7bcd6d95       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   d48bd95d30495       etcd-default-k8s-diff-port-754332
	b88f0f70ed0bd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   cf79173828d05       kube-controller-manager-default-k8s-diff-port-754332
	e409487833e23       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   7b7534afde500       kube-scheduler-default-k8s-diff-port-754332
	38c2a1c006d77       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   13002c9acc747       kube-apiserver-default-k8s-diff-port-754332
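The table above is CRI-O's own view of the node after the restart (storage-provisioner at attempt 2, everything else at attempt 1). To reproduce the same snapshot by hand, a minimal sketch, assuming crictl is installed on the node and pointed at the CRI-O socket listed in the node annotations (unix:///var/run/crio/crio.sock), would be:

	minikube ssh -p default-k8s-diff-port-754332
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# narrow to a single workload, e.g. the storage-provisioner restart chain listed above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name storage-provisioner
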
	
	
	==> coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53310 - 37392 "HINFO IN 1613091291824127344.2356255575009687738. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013987659s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-754332
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-754332
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=default-k8s-diff-port-754332
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_54_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:54:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-754332
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:14:26 +0000   Sat, 14 Sep 2024 00:54:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:14:26 +0000   Sat, 14 Sep 2024 00:54:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:14:26 +0000   Sat, 14 Sep 2024 00:54:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:14:26 +0000   Sat, 14 Sep 2024 01:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.203
	  Hostname:    default-k8s-diff-port-754332
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8e2a2e1f8984b5881f3db0787376198
	  System UUID:                b8e2a2e1-f898-4b58-81f3-db0787376198
	  Boot ID:                    ad514a84-2928-48e5-84c0-914dfa6e7281
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-5lgsh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-default-k8s-diff-port-754332                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-754332             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-754332    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-f9qhk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-diff-port-754332             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-lxzvw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-754332 event: Registered Node default-k8s-diff-port-754332 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-754332 event: Registered Node default-k8s-diff-port-754332 in Controller
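The node description above is a one-off capture; the same information, plus the pods actually scheduled on the node, can be pulled again from the kubectl context used elsewhere in this report (standard kubectl, no test helpers assumed):

	kubectl --context default-k8s-diff-port-754332 describe node default-k8s-diff-port-754332
	kubectl --context default-k8s-diff-port-754332 get pods -A -o wide --field-selector spec.nodeName=default-k8s-diff-port-754332
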
	
	
	==> dmesg <==
	[Sep14 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056929] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039235] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.910810] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.970690] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571009] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.257465] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.063447] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.166845] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.137655] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.314761] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.049921] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +1.674591] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +0.065160] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.502941] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.437536] systemd-fstab-generator[1544]: Ignoring "noauto" option for root device
	[  +3.271496] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.076296] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] <==
	{"level":"info","ts":"2024-09-14T01:03:40.049767Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T01:03:40.049752Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T01:03:40.049992Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e420fb3f9edbaec1","local-member-id":"fd1c782511c6d1a","added-peer-id":"fd1c782511c6d1a","added-peer-peer-urls":["https://192.168.72.203:2380"]}
	{"level":"info","ts":"2024-09-14T01:03:40.052708Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e420fb3f9edbaec1","local-member-id":"fd1c782511c6d1a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:03:40.052791Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:03:41.073471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:41.073521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:41.073548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgPreVoteResp from fd1c782511c6d1a at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:41.073560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.073565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgVoteResp from fd1c782511c6d1a at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.073574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became leader at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.073581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fd1c782511c6d1a elected leader fd1c782511c6d1a at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.075959Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fd1c782511c6d1a","local-member-attributes":"{Name:default-k8s-diff-port-754332 ClientURLs:[https://192.168.72.203:2379]}","request-path":"/0/members/fd1c782511c6d1a/attributes","cluster-id":"e420fb3f9edbaec1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T01:03:41.076122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:41.076503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:41.077253Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:41.078236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.203:2379"}
	{"level":"info","ts":"2024-09-14T01:03:41.078324Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:41.078365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:41.079039Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:41.081193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T01:03:56.622062Z","caller":"traceutil/trace.go:171","msg":"trace[1274899830] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"111.416158ms","start":"2024-09-14T01:03:56.510623Z","end":"2024-09-14T01:03:56.622039Z","steps":["trace[1274899830] 'process raft request'  (duration: 111.313363ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T01:13:41.129959Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2024-09-14T01:13:41.139880Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":865,"took":"9.203053ms","hash":698923900,"current-db-size-bytes":2842624,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2842624,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-14T01:13:41.139957Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":698923900,"revision":865,"compact-revision":-1}
	
	
	==> kernel <==
	 01:17:13 up 13 min,  0 users,  load average: 0.13, 0.13, 0.10
	Linux default-k8s-diff-port-754332 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] <==
	E0914 01:13:43.449289       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0914 01:13:43.449177       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 01:13:43.450612       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:13:43.450750       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:14:43.451765       1 handler_proxy.go:99] no RequestInfo found in the context
	W0914 01:14:43.451794       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:14:43.452162       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0914 01:14:43.452173       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:14:43.453357       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:14:43.453382       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:16:43.454593       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:16:43.454724       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 01:16:43.454593       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:16:43.454768       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 01:16:43.456082       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:16:43.456092       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
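The repeated 503s above mean the v1beta1.metrics.k8s.io APIService has no healthy backend, which matches the metrics-server pod that never becomes ready in the kubelet log further down. A follow-up check (not part of the captured output) from the same kubectl context would be:

	kubectl --context default-k8s-diff-port-754332 get apiservice v1beta1.metrics.k8s.io
	kubectl --context default-k8s-diff-port-754332 -n kube-system get pods | grep metrics-server
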
	
	
	==> kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] <==
	E0914 01:11:46.122245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:11:46.607079       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:12:16.128598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:12:16.613889       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:12:46.136201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:12:46.621940       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:13:16.145801       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:13:16.629717       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:13:46.152298       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:13:46.637784       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:14:16.158123       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:14:16.644795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:14:26.363686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-754332"
	I0914 01:14:45.922129       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="248.104µs"
	E0914 01:14:46.166299       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:14:46.651903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:14:58.921048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="133.597µs"
	E0914 01:15:16.173043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:15:16.659076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:15:46.178636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:15:46.667214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:16:16.184484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:16:16.676219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:16:46.190966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:16:46.685792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 01:03:43.597133       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 01:03:43.611220       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.203"]
	E0914 01:03:43.613473       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:03:43.668084       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 01:03:43.668134       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 01:03:43.668165       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:03:43.670736       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:03:43.671039       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:03:43.671053       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:43.673291       1 config.go:199] "Starting service config controller"
	I0914 01:03:43.673377       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:03:43.674166       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:03:43.679876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:03:43.679954       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 01:03:43.676536       1 config.go:328] "Starting node config controller"
	I0914 01:03:43.680006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:03:43.780260       1 shared_informer.go:320] Caches are synced for node config
	I0914 01:03:43.780292       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] <==
	I0914 01:03:40.535118       1 serving.go:386] Generated self-signed cert in-memory
	W0914 01:03:42.375967       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 01:03:42.376486       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 01:03:42.376546       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 01:03:42.376570       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 01:03:42.471380       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 01:03:42.471657       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:42.484140       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 01:03:42.484294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 01:03:42.484328       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:03:42.484346       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 01:03:42.584788       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 01:15:59 default-k8s-diff-port-754332 kubelet[921]: E0914 01:15:59.904345     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:16:08 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:08.123904     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276568122996710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:08 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:08.126988     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276568122996710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:14 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:14.904078     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:16:18 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:18.131318     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276578131064080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:18 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:18.131344     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276578131064080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:28 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:28.133050     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276588132762653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:28 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:28.133484     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276588132762653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:28 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:28.904352     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:16:37 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:37.938446     921 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 01:16:37 default-k8s-diff-port-754332 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 01:16:37 default-k8s-diff-port-754332 kubelet[921]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 01:16:37 default-k8s-diff-port-754332 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 01:16:37 default-k8s-diff-port-754332 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:16:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:38.135060     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276598134728184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:38.135090     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276598134728184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:42 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:42.904083     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:16:48 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:48.137577     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276608136433721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:48 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:48.138227     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276608136433721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:56 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:56.903138     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:16:58 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:58.141294     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276618140856798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:16:58 default-k8s-diff-port-754332 kubelet[921]: E0914 01:16:58.141337     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276618140856798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:17:07 default-k8s-diff-port-754332 kubelet[921]: E0914 01:17:07.905019     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:17:08 default-k8s-diff-port-754332 kubelet[921]: E0914 01:17:08.143782     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276628143322609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:17:08 default-k8s-diff-port-754332 kubelet[921]: E0914 01:17:08.143829     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276628143322609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
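Every metrics-server sync above ends in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, so the pod can never start; these StartStop runs appear to point metrics-server at an unreachable registry on purpose, which would make the messages expected noise rather than the root cause of the failure. To confirm which image the deployment actually references (assuming the addon's deployment is named metrics-server), one could run:

	kubectl --context default-k8s-diff-port-754332 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
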
	
	
	==> storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] <==
	I0914 01:03:43.429975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 01:04:13.446182       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] <==
	I0914 01:04:14.225116       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:04:14.236318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:04:14.236459       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:04:31.637423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:04:31.637745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754332_88cfcfd5-a7c3-4411-8741-4588497658bd!
	I0914 01:04:31.638016       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96e3f175-2c30-4e03-b51a-193762063bcd", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-754332_88cfcfd5-a7c3-4411-8741-4588497658bd became leader
	I0914 01:04:31.738117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754332_88cfcfd5-a7c3-4411-8741-4588497658bd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lxzvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 describe pod metrics-server-6867b74b74-lxzvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-754332 describe pod metrics-server-6867b74b74-lxzvw: exit status 1 (62.772075ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lxzvw" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-754332 describe pod metrics-server-6867b74b74-lxzvw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:11:25.780976   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:11:34.736220   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:11:58.155948   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:12:06.865845   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:12:20.624545   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:12:34.611824   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:12:48.846754   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:13:09.408751   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:13:21.220487   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
(last message repeated 6 more times)
E0914 01:13:27.787678   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:13:29.931135   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
(last message repeated 60 more times)
E0914 01:14:31.535728   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:14:32.473023   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
(last message repeated 4 more times)
E0914 01:14:37.446883   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
(last message repeated 13 more times)
E0914 01:14:50.851972   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
(last message repeated 20 more times)
E0914 01:15:11.672413   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
(last message repeated 60 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:16:25.781589   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
[last warning repeated 40 more times]
E0914 01:17:06.865253   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
[last warning repeated 13 more times]
E0914 01:17:20.624608   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
[last warning repeated 47 more times]
E0914 01:18:09.409348   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
[last warning repeated 18 more times]
E0914 01:18:27.787415   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
[last warning repeated 21 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:19:31.535280   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
    (previous warning repeated a further 5 times)
E0914 01:19:37.446602   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
    (previous warning repeated a further 24 times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (245.294781ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-431084" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
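For reference, the warnings above come from repeatedly listing pods with the k8s-app=kubernetes-dashboard label against the profile's apiserver until the 9m0s budget runs out; while the apiserver is down every list fails with "connection refused". The snippet below is a minimal, hypothetical client-go sketch of that kind of label-selector poll; it is not the helpers_test.go implementation, and the kubeconfig path, namespace, and timings are illustrative assumptions only.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the profile's real kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for up to 9 minutes, matching the budget reported by the test.
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is unreachable this prints the same
			// "connection refused" error seen in the warnings above.
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 {
			fmt.Println("dashboard pod present:", pods.Items[0].Name)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard")
}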
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (229.382153ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-431084 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-431084 logs -n 25: (1.713761034s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-617306             | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
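The preload handling above (preload.go:131/146/172, cache.go:56/59) reports only an existence check on the cached tarball before declaring the cache valid and skipping the download. A minimal manual equivalent of that check, using the exact cache path from this run, would be:

    PRELOAD=/home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
    # present => minikube reuses it; absent => it re-downloads before starting the node
    test -f "$PRELOAD" && ls -lh "$PRELOAD" || echo "preload missing, will be downloaded"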
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
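IP discovery for the restarted VM is driven entirely by the libvirt DHCP lease on the mk-no-preload-057857 network: minikube polls until a lease for the VM's MAC appears, with the backoff shown in the retry.go messages above. The same lease table can be inspected by hand on the host (assuming the libvirt client tools are installed there):

    sudo virsh net-dhcp-leases mk-no-preload-057857
    # an entry for MAC 52:54:00:12:57:32 with IP 192.168.39.129 appears once the guest has requested an address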
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
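The hostname provisioning is written to be idempotent: it sets the hostname, then only touches /etc/hosts if no entry for the new name exists, preferring to rewrite an existing 127.0.1.1 line over appending a duplicate. The same pattern from the log, collected into a standalone sketch (the NEW variable is added here purely for illustration):

    NEW=no-preload-057857
    sudo hostname "$NEW" && echo "$NEW" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$NEW" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NEW/g" /etc/hosts
      else
        echo "127.0.1.1 $NEW" | sudo tee -a /etc/hosts
      fi
    fi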
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
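configureAuth regenerated the machine server certificate (SANs covering 127.0.0.1, the node IP 192.168.39.129, localhost, minikube and the hostname) and copied it, the CA, and the key onto the guest. A quick way to confirm the copied material on the node, assuming the remote paths from the log and an OpenSSL new enough (1.1.1+) for the -ext flag:

    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName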
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
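fix.go compares the guest's date +%s.%N output with the host's wall clock at the moment the command returns; the ~75 ms delta here is well inside minikube's tolerance, so no clock adjustment is needed. A rough manual equivalent, assuming the profile name from this run and that minikube ssh is available on the host:

    host=$(date +%s.%N); guest=$(minikube ssh -p no-preload-057857 "date +%s.%N")
    echo "$guest $host" | awk '{printf "guest-host delta: %.3fs\n", $1 - $2}'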
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
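Before switching to CRI-O, minikube makes sure no other runtime can claim the CRI socket: it stops and masks cri-docker, then docker itself, and finally checks that docker is inactive. Grouped into one place, the commands from the log are effectively:

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is not active"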
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
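All of the CRI-O tuning above edits the drop-in /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A quick sanity check of the resulting file on the node:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf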
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
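With CRI-O restarted, runtime detection is just a matter of asking the socket. The same checks the log performs can be repeated by hand on the node:

    cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version         # RuntimeName: cri-o, RuntimeVersion: 1.29.1, RuntimeApiVersion: v1
    crio --version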
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
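	The image.go lines above show the preload fallback: each required image is first looked up in a local daemon, and when that fails the loader falls back to the per-image cache files under .minikube/cache/images/amd64. A minimal, hypothetical Go sketch of that check-then-fall-back pattern follows; cachePathFor and the cache directory are illustrative assumptions, not minikube's real API.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachePathFor mirrors the on-disk layout visible later in the log,
	// e.g. .../cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	func cachePathFor(cacheDir, image string) string {
		return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // assumed location
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.20.0",
			"registry.k8s.io/etcd:3.4.13-0",
		}
		for _, img := range images {
			p := cachePathFor(cacheDir, img)
			if _, err := os.Stat(p); err != nil {
				// corresponds to the "Unable to load cached images" warning below
				fmt.Printf("cannot load %s from cache: %v\n", img, err)
				continue
			}
			fmt.Printf("would transfer %s from %s\n", img, p)
		}
	}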
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
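	The repeated "will retry after" lines come from polling the libvirt network for the VM's DHCP lease with a growing, jittered delay. A minimal Go sketch of that polling loop, under the assumption that lookupIP is a stand-in for the real lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stub standing in for the DHCP-lease query; it always fails here.
	func lookupIP() (string, error) { return "", errors.New("no lease yet") }

	// waitForIP polls until an address appears or the deadline passes, growing the
	// delay and adding jitter, roughly like the 1.07s -> 1.66s -> 1.77s steps above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
	}

	func main() {
		if _, err := waitForIP(5 * time.Second); err != nil {
			fmt.Println(err)
		}
	}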
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
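	The rendered kubeadm/kubelet/kube-proxy configuration above is written to /var/tmp/minikube/kubeadm.yaml.new, compared against the existing file with diff -u further down, and only copied over kubeadm.yaml when it differs. A minimal, hypothetical Go sketch of that write-compare-replace step (the helper and the hard-coded YAML are illustrative, and writing under /var/tmp/minikube normally requires root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// syncConfig writes the freshly rendered config next to the live one and only
	// replaces the live file when the contents differ.
	func syncConfig(current, next string, rendered []byte) error {
		if err := os.WriteFile(next, rendered, 0o644); err != nil {
			return err
		}
		old, err := os.ReadFile(current)
		if err == nil && bytes.Equal(old, rendered) {
			return nil // unchanged: the cluster does not require reconfiguration
		}
		// mirrors the later `sudo cp kubeadm.yaml.new kubeadm.yaml` step in the log
		return exec.Command("cp", next, current).Run()
	}

	func main() {
		rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n")
		err := syncConfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new", rendered)
		fmt.Println("sync result:", err)
	}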
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
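	Each of the openssl runs above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?". The same check can be done natively; a small sketch with crypto/x509, where the certificate path is taken from the log and everything else is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the equivalent of `openssl x509 -checkend` returning non-zero.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}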
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
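	The pod_ready lines poll the kube-apiserver and kube-controller-manager pods until their Ready condition flips to True, with a 4-minute budget per pod. A minimal client-go sketch of that wait, assuming a kubeconfig path and pod name purely for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady returns true when the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-no-preload-057857", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}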
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
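	After the kubeadm init phases, the restart path simply polls every half second for a kube-apiserver process to appear, using pgrep over SSH. A small local sketch of the same polling (exec'ing pgrep directly instead of over SSH, which is an assumption made for brevity):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process whose full command line matches
	// pattern exists, or the context expires.
	func waitForProcess(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// pgrep exits 0 as soon as at least one process matches
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
			fmt.Println("apiserver did not appear:", err)
		}
	}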
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
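For reference, the sequence of sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following lines (a sketch assuming an otherwise default drop-in; not captured from this run):

	# rewritten by the commands above: pause image and cgroup handling
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	# lets pods bind low ports without extra privileges
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]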
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
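Together with the earlier host.minikube.internal rewrite, the guest's /etc/hosts should now carry entries along these lines (a sketch; both addresses are taken from the commands above):

	192.168.50.1	host.minikube.internal
	192.168.50.105	control-plane.minikube.internal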
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
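The test -L / ln -fs steps above maintain the hash-named symlinks OpenSSL uses to look up CAs in /etc/ssl/certs: the link name is the certificate's subject hash plus a ".0" suffix (b5213941.0 above corresponds to minikubeCA.pem). A minimal way to recreate one of these links by hand, reusing the paths from this run:

	# print the subject hash (b5213941 for minikubeCA.pem here) and create the matching lookup symlink
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"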
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
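The libmachine lines above keep retrying the IP lookup with a growing, slightly randomized delay (1.5s, 1.8s, 2.9s, ...) while the VM waits for its DHCP lease. A generic sketch of that retry-with-increasing-backoff pattern (illustrative only; not the retry.go implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds, sleeping a little longer
    // (with jitter) after each failure, up to maxAttempts.
    func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
        delay := base
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err := fn(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // grow the delay, roughly like the intervals in the log
        }
        return errors.New("gave up waiting")
    }

    func main() {
        _ = retryWithBackoff(10, time.Second, func() error {
            return errors.New("IP address not assigned yet") // stand-in for the DHCP lease lookup
        })
    }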
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
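The healthz exchange above follows the usual restart progression: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A hedged sketch of polling such an endpoint until it reports healthy; the InsecureSkipVerify transport is for illustration only, a real caller would trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustration only: skip certificate verification; a production
            // client would load the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz reported "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.105:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }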
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
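Each per-pod wait above short-circuits because the node itself still reports Ready=False, so the pod checks are skipped rather than timed out. For reference, a small illustration of checking a pod's Ready condition from outside the cluster by shelling out to kubectl (an editor's sketch against the profile's kubeconfig context; it is not the pod_ready.go implementation, which queries the API server directly):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady returns true when the pod's Ready condition is "True".
    func podReady(kubecontext, namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", kubecontext,
            "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        // Context and pod names taken from the log above.
        for i := 0; i < 10; i++ {
            ready, err := podReady("embed-certs-880490", "kube-system", "coredns-7c65d6cfc9-ssskq")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("pod never became Ready")
    }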
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
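The provisioning step above pushes a small shell script over SSH (key-based auth, StrictHostKeyChecking disabled) and checks its exit status. A sketch of the same remote-execution pattern with golang.org/x/crypto/ssh, using the user and key path shown in the log; illustrative only, libmachine has its own SSH plumbing:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes cmd on host:22 as user, authenticating with the given private key.
    func runRemote(host, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", host+":22", cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("192.168.72.203", "docker",
            "/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }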
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
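
The 403 → 500 → 200 progression above is the normal restart sequence: /healthz is probed anonymously, so it returns 403 until the RBAC bootstrap roles exist, then 500 while the remaining post-start hooks (the [-]poststarthook/... lines) finish, and finally 200. A minimal poller in the same spirit might look like the sketch below; the URL, timeout values, and function name are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline passes. Non-200 bodies (like the 403/500 payloads above)
	// are printed so the failing post-start hooks stay visible.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Anonymous probe against the apiserver's self-signed certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.203:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
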
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
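
The bridge CNI step above simply drops a conflist into /etc/cni/net.d/. For reference, a bridge + host-local conflist of roughly that size could be generated like this; the exact fields and the 10.244.0.0/16 subnet are assumptions rather than the file minikube actually writes.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// A plausible bridge + host-local CNI configuration; values are assumptions.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out)) // content of the kind copied to /etc/cni/net.d/1-k8s.conflist
	}
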
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
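
The pod_ready waits above poll each system-critical pod for a PodReady condition of True (and skip pods whose node is not yet Ready). An equivalent one-off check with client-go could look like the sketch below; the kubeconfig path and pod name are taken from the log and otherwise assumed.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-5lgsh", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
	}
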
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
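
The oom_adj value of -16 means the kernel OOM killer is strongly biased away from killing the apiserver. The same check the ssh_runner performs (pgrep, then read /proc/<pid>/oom_adj) could be reproduced with a small sketch like this; flags and paths mirror the log and are otherwise assumptions.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the newest exact-match kube-apiserver process, as the log's pgrep does.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		// A negative value (here -16) deprioritizes the process for OOM killing.
		fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, adj)
	}
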
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
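
The addon step above copies the manifests into /etc/kubernetes/addons/ on the node and applies them with the node's bundled kubectl against /var/lib/minikube/kubeconfig. Run by hand from inside the VM, the metrics-server apply would look roughly like the command wrapped below; this is a sketch reusing the paths shown in the log, not minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// sudo accepts VAR=value arguments before the command, which is how the
		// log's ssh_runner passes KUBECONFIG to the node-local kubectl.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
		)
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
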
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
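The lines above are minikube's stale-kubeconfig cleanup before it re-runs kubeadm init: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist yet). A minimal standalone sketch of that cycle, using only the endpoint and file names that appear in the log:

# Stale-kubeconfig cleanup as a standalone loop (illustrative sketch of the
# logged commands; files missing the endpoint are removed so kubeadm init can
# regenerate them).
endpoint="https://control-plane.minikube.internal:8443"
for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
        sudo rm -f "/etc/kubernetes/$conf"
    fi
done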
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
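The surrounding lines are the log-gathering pass minikube runs against the node: it resolves each control-plane component's container ID with crictl, tails that container's logs, and pulls unit logs for kubelet and CRI-O plus the kernel ring buffer. A condensed shell sketch of the same commands; the component list and line counts mirror what the log runs, and only the first ID per component is tailed here:

# Condensed log-gathering pass (illustrative sketch of the logged commands).
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet storage-provisioner; do
    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
done
sudo journalctl -u kubelet -n 400
sudo journalctl -u crio -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400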
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
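Here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The file's contents are not shown in the log, so the following is only a generic bridge-plus-portmap conflist of the kind that directory expects, written to an example path on purpose; the network name, bridge name, and pod subnet are assumptions for illustration, not the real file:

# Illustrative only: the real 1-k8s.conflist written above is not shown in the
# log, so the names and the 10.244.0.0/16 subnet below are assumptions.
cat > /tmp/1-k8s.conflist.example <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF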
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
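The kubelet health probe kubeadm keeps retrying above can be reproduced by hand with the URL quoted in the message; kubeadm performs an equivalent HTTP GET itself rather than shelling out to curl. When the probe is refused, the kubelet unit log is the usual next step:

# Reproduce kubeadm's kubelet health probe by hand (illustrative).
curl -sSL http://localhost:10248/healthz
# If the connection is refused, the kubelet journal usually explains why.
sudo journalctl -u kubelet -n 50 --no-pager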
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
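The elevateKubeSystemPrivileges step timed above consists of the two commands repeated in the preceding lines: create the minikube-rbac cluster-admin binding for kube-system's default service account, then poll "kubectl get sa default" until the default service account exists before cluster setup continues. A sketch of the same sequence; the binary and kubeconfig paths come from the logged commands, and the half-second retry interval is inferred from the log timestamps:

# elevateKubeSystemPrivileges as a standalone sketch (illustrative); paths are
# the ones shown in the log above.
sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
  --kubeconfig=/var/lib/minikube/kubeconfig
# Poll until the default service account exists (the ~0.5s cadence matches the
# retries logged above).
until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
  --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done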
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
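
The addon enable sequence above works by copying each manifest onto the node (the scp lines) and then applying it with the node-local kubectl against the node-local kubeconfig. A minimal by-hand equivalent, restating only paths and commands that already appear in the log; the `minikube ssh -p no-preload-057857` entry point is an assumed way to reach the node, since the harness uses its own SSH client:

    # inside the node shell (e.g. `minikube ssh -p no-preload-057857`)
    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.31.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml \
        -f /etc/kubernetes/addons/storageclass.yaml
    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.31.1/kubectl apply \
        -f /etc/kubernetes/addons/metrics-apiservice.yaml \
        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
        -f /etc/kubernetes/addons/metrics-server-service.yaml
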
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
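
The block above for "embed-certs-880490" is the generic log-gathering pass: poll the apiserver /healthz endpoint, locate each control-plane container with crictl, tail its logs, and pull kubelet/CRI-O logs via journalctl. The same data can be collected by hand from inside the node. This is a sketch built from commands that appear in the log; the `minikube ssh` entry point and the `<container-id>` placeholder are illustrative, and `kubectl get --raw=/healthz` stands in for the harness's direct HTTPS probe:

    # inside the node shell (e.g. `minikube ssh -p embed-certs-880490`)
    $ sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/healthz
    ok
    $ sudo crictl ps -a --quiet --name=kube-apiserver    # prints the container id
    $ sudo crictl logs --tail 400 <container-id>         # per-component container logs
    $ sudo journalctl -u kubelet -n 400                  # kubelet logs
    $ sudo journalctl -u crio -n 400                     # CRI-O logs
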
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
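
At this point "no-preload-057857" is up with storage-provisioner, default-storageclass and metrics-server enabled, but metrics-server uses the image fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line earlier), which is why its pod is reported Pending / ContainersNotReady here and in the waits that follow. A quick way to observe that from the host kubeconfig; this is a sketch, and the deployment name metrics-server is inferred from the pod name rather than stated in the log:

    $ kubectl --context no-preload-057857 -n kube-system get pods
    $ kubectl --context no-preload-057857 -n kube-system get deploy metrics-server
    $ kubectl --context no-preload-057857 get storageclass
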
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
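	For reference, the kube-system pod state that the wait loop above inspected can be listed directly. This is an illustrative command only, not output from the test run; it assumes the kubectl context name printed in the "Done!" line above:

	    # Illustrative only: list the kube-system pods the wait loop above checked.
	    kubectl --context default-k8s-diff-port-754332 get pods -n kube-system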
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
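	The failed run above ends with minikube's suggestion to inspect the kubelet and retry with the systemd cgroup driver. A minimal sketch of acting on that suggestion follows; it is illustrative only, assumes the profile name old-k8s-version-431084 seen in the CRI-O log below, and the remaining flags from the original start invocation would still apply:

	    # Illustrative only: inspect kubelet health on the node (profile name assumed from the log below).
	    minikube ssh -p old-k8s-version-431084 -- sudo systemctl status kubelet
	    minikube ssh -p old-k8s-version-431084 -- sudo journalctl -xeu kubelet
	    # List any control-plane containers CRI-O managed to start.
	    minikube ssh -p old-k8s-version-431084 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	    # Retry with the kubelet cgroup driver override suggested above.
	    minikube start -p old-k8s-version-431084 --extra-config=kubelet.cgroup-driver=systemd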
	
	
	==> CRI-O <==
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.393555822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276804393535693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a75515b1-fe0d-4fc0-abb4-7e85b89c98bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.394099635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82646956-6d1d-49cc-9c66-92338d2a5fde name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.394169019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82646956-6d1d-49cc-9c66-92338d2a5fde name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.394205748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=82646956-6d1d-49cc-9c66-92338d2a5fde name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.424651196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0a738b3-6b67-466b-a627-348e957dafc2 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.424784268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0a738b3-6b67-466b-a627-348e957dafc2 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.426342139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c21047c9-7b76-4cee-b830-8aed4a90b4fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.426776313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276804426756309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c21047c9-7b76-4cee-b830-8aed4a90b4fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.427236498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b21f3a59-33c5-4fa9-b481-76ae3c3ad8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.427284809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b21f3a59-33c5-4fa9-b481-76ae3c3ad8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.427319320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b21f3a59-33c5-4fa9-b481-76ae3c3ad8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.460601018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=187598af-bbde-488a-99dd-de5b3e752f3f name=/runtime.v1.RuntimeService/Version
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.460684202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=187598af-bbde-488a-99dd-de5b3e752f3f name=/runtime.v1.RuntimeService/Version
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.461844674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=181cc8b5-a6ae-43a6-9293-89768ac1cd3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.462304438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276804462278940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=181cc8b5-a6ae-43a6-9293-89768ac1cd3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.462936040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0e07a85-7e40-4858-9f01-bf6c31b549b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.462987703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0e07a85-7e40-4858-9f01-bf6c31b549b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.463021998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f0e07a85-7e40-4858-9f01-bf6c31b549b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.501666481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d74553a-c2e0-47fc-a105-6c0287ad542e name=/runtime.v1.RuntimeService/Version
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.501837882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d74553a-c2e0-47fc-a105-6c0287ad542e name=/runtime.v1.RuntimeService/Version
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.503134463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a8cb124-01db-462a-8895-d893bd1ff021 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.503764734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276804503680355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a8cb124-01db-462a-8895-d893bd1ff021 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.505079473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2576ab26-0595-4a3d-8bfa-c2241b1a27fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.505144779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2576ab26-0595-4a3d-8bfa-c2241b1a27fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:20:04 old-k8s-version-431084 crio[634]: time="2024-09-14 01:20:04.505179736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2576ab26-0595-4a3d-8bfa-c2241b1a27fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep14 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037690] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.969079] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.928925] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.082346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068199] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.169952] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.159964] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.281242] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Sep14 01:03] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.061152] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309557] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +10.314900] kauditd_printk_skb: 46 callbacks suppressed
	[Sep14 01:07] systemd-fstab-generator[5021]: Ignoring "noauto" option for root device
	[Sep14 01:09] systemd-fstab-generator[5305]: Ignoring "noauto" option for root device
	[  +0.068389] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:20:04 up 17 min,  0 users,  load average: 0.00, 0.01, 0.02
	Linux old-k8s-version-431084 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net.(*sysDialer).dialTCP(0xc000b69000, 0x4f7fe40, 0xc00027fa40, 0x0, 0xc000b38450, 0x57b620, 0x48ab5d6, 0x7f9b0dcc1080)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net.(*sysDialer).dialSingle(0xc000b69000, 0x4f7fe40, 0xc00027fa40, 0x4f1ff00, 0xc000b38450, 0x0, 0x0, 0x0, 0x0)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net.(*sysDialer).dialSerial(0xc000b69000, 0x4f7fe40, 0xc00027fa40, 0xc000b0a600, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/dial.go:548 +0x152
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net.(*Dialer).DialContext(0xc000125f20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0001dbdd0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b3b9e0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0001dbdd0, 0x24, 0x60, 0x7f9b0d2a4df0, 0x118, ...)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net/http.(*Transport).dial(0xc00097c140, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0001dbdd0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net/http.(*Transport).dialConn(0xc00097c140, 0x4f7fe00, 0xc000120018, 0x0, 0xc00042a600, 0x5, 0xc0001dbdd0, 0x24, 0x0, 0xc000a3aa20, ...)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net/http.(*Transport).dialConnFor(0xc00097c140, 0xc0000da790)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: created by net/http.(*Transport).queueForDial
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: goroutine 171 [select]:
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: net.(*netFD).connect.func2(0x4f7fe40, 0xc00027fa40, 0xc000b69100, 0xc0006ead80, 0xc0006ead20)
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]: created by net.(*netFD).connect
	Sep 14 01:20:04 old-k8s-version-431084 kubelet[6489]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Sep 14 01:20:04 old-k8s-version-431084 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 14 01:20:04 old-k8s-version-431084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (225.269879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-431084" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (544.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880490 -n embed-certs-880490
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-14 01:25:57.276143888 +0000 UTC m=+7171.703789085
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-880490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-880490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (56.716562ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-880490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-880490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-880490 logs -n 25: (2.061983291s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 01:22 UTC | 14 Sep 24 01:22 UTC |
	| delete  | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 01:23 UTC | 14 Sep 24 01:23 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 01:23 UTC | 14 Sep 24 01:23 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
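
Editor's note: the block above shows each cached image tarball being loaded into CRI-O's store with `sudo podman load -i <tar>` and stale copies removed with `crictl rmi`. Below is a minimal local sketch of that load-and-verify step, assuming podman and crictl are installed; the file name loadcache.go and the tarball path are illustrative only, and minikube actually runs these commands remotely through its ssh_runner rather than with local exec.

	// loadcache.go - illustrative sketch only; minikube runs the same commands
	// remotely through its ssh_runner, not with local exec as shown here.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func run(name string, args ...string) (string, time.Duration, error) {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}

	func main() {
		// Hypothetical tarball path; the report uses /var/lib/minikube/images/... on the VM.
		tar := "/var/lib/minikube/images/kube-apiserver_v1.31.1"

		if out, d, err := run("sudo", "podman", "load", "-i", tar); err != nil {
			log.Fatalf("podman load failed after %v: %v\n%s", d, err, out)
		} else {
			fmt.Printf("loaded %s in %v\n", tar, d)
		}

		// Verify the image is now visible to the CRI tooling.
		if out, _, err := run("sudo", "crictl", "images"); err == nil {
			fmt.Print(out)
		}
	}
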
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
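
Editor's note: the `grep ... control-plane.minikube.internal$ /etc/hosts` check followed by the rewrite-and-copy one-liner above is an idempotent way to pin control-plane.minikube.internal to the node IP. A rough Go equivalent is sketched below; the file name ensurehosts.go is hypothetical, the entry and paths come from the log, and installing the result over /etc/hosts still needs root (the log does this with `sudo cp`).

	// ensurehosts.go - sketch of the idempotent /etc/hosts update seen above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		const name = "control-plane.minikube.internal"
		const entry = "192.168.39.129\t" + name

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for the name, like the
			// `grep -v $'\tcontrol-plane.minikube.internal$'` in the log.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)

		tmp := fmt.Sprintf("/tmp/hosts.%d", os.Getpid())
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
		fmt.Println("wrote", tmp, "- copy it over /etc/hosts with sudo to apply")
	}
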
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
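
Editor's note: each `openssl x509 -checkend 86400` above asks whether the certificate stops being valid within the next 24 hours. The same check can be done with Go's standard library; the sketch below (hypothetical file name checkend.go) reads one of the certificates named in the log and applies the identical 86400-second test.

	// checkend.go - Go equivalent of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// One of the certificates checked in the log.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// -checkend 86400: fail if the cert is no longer valid 86400s from now.
		deadline := time.Now().Add(86400 * time.Second)
		if deadline.After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}
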
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
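
Editor's note: after restarting the kubelet, the restore path polls the apiserver's /healthz endpoint (URL above) until it answers. The sketch below (hypothetical file name healthz.go) shows a self-contained polling loop; TLS verification is skipped here only because the sketch does not load minikube's CA, which a real client should trust instead.

	// healthz.go - poll an apiserver /healthz endpoint until it returns 200 or a timeout hits.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.39.129:8443/healthz" // from the log above
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: a real client should trust the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}
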
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
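
Editor's note: provision.go:117 regenerates the machine's server certificate carrying the SANs listed above. Below is a compact standard-library sketch of issuing a SAN-bearing server certificate (hypothetical file name servercert.go); it self-signs to stay short, whereas minikube signs with ca.pem/ca-key.pem as the log states.

	// servercert.go - sketch: issue a self-signed server cert carrying the SANs from the log.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-431084"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provisioning line above.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-431084"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
		}

		// Self-signed for brevity; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}
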
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
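
Editor's note: fix.go runs `date +%s.%N` on the guest, parses the result, and accepts the host when the delta against the local clock is within tolerance, as the lines above show. The sketch below (hypothetical file name clockdelta.go) reproduces the parse-and-compare step; the tolerance value is an assumption for illustration, since the log only states that the delta was within tolerance.

	// clockdelta.go - parse a `date +%s.%N` style timestamp and compare it to the local clock.
	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad or truncate the fractional part to exactly 9 digits of nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726275772.293932338") // value from the log
		if err != nil {
			log.Fatal(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// Assumed tolerance for the sketch; minikube's actual threshold is not shown here.
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta: %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
	}
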
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
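(The sequence above rewrites CRI-O's drop-in config with sed — pause image "registry.k8s.io/pause:3.2" and cgroup_manager "cgroupfs" in /etc/crio/crio.conf.d/02-crio.conf — and then restarts the service. Below is a minimal Go sketch of the same two substitutions, using only the path and values shown in this log; it is illustrative and not minikube's implementation.)

// Illustrative only: approximates the two sed edits from the log above.
// The config path and values come from the log; everything else is an assumption.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log

	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	out := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(confPath, []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}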
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
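(The polling above hits /healthz until the apiserver answers 200 "ok"; the 403 responses for the anonymous user and the 500 responses while post-start hooks are still pending are both treated as "not ready yet". Below is a minimal Go polling sketch under those assumptions, using the address from this log; the retry interval and TLS handling are assumptions, and this is not minikube's actual code.)

// Illustrative healthz polling loop. Address taken from the log; everything
// else (interval, timeout, skipping TLS verification) is an assumption.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver serves a self-signed cert, so this
		// unauthenticated probe skips verification (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.129:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous) and 500 (hooks pending) both mean "keep waiting".
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}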
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
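	The four grep/rm pairs above check each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that does not reference it, so that the kubeadm phases that follow can regenerate them. A hypothetical loop expressing the same per-file cleanup:
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done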
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
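	The healthz probes above progress from 403 (the anonymous user is not yet authorized for /healthz), to 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, to a plain 200 "ok". A minimal stand-alone equivalent of that wait, assuming the same endpoint and skipping verification of the self-signed certificate:
	    # poll until the apiserver returns a bare "ok" body
	    until [ "$(curl -ksS https://192.168.50.105:8443/healthz)" = "ok" ]; do
	      sleep 0.5
	    done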
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
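The stop/disable/mask sequence above is how minikube takes cri-dockerd and docker out of the picture before handing the node to CRI-O (the guest image ships more than one runtime), so neither unit can be pulled back in by a socket activation or dependency. A condensed sketch of the same steps, using only the unit names that appear in the log (the loop is an illustrative simplification, not minikube's exact code path):

    # Keep docker and cri-dockerd from claiming the CRI socket once CRI-O is configured
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service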
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
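The lines above show the CRI-O preparation in full: crictl is pointed at the CRI-O socket, the pause image is pinned, the cgroup manager is switched to cgroupfs, unprivileged low ports are opened via default_sysctls, and the runtime is restarted. Condensed into the equivalent manual commands (taken verbatim from the Run: lines above; the drop-in path /etc/crio/crio.conf.d/02-crio.conf is as logged):

    # Point crictl at the CRI-O socket (the same /etc/crictl.yaml written above)
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and switch the cgroup manager, exactly as the sed calls above do
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Allow pods to bind privileged ports (the log first ensures a default_sysctls = [ ] block exists)
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio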
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
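This is minikube's preload fast path: `crictl images --output json` is checked for the expected release image, and because kube-apiserver:v1.31.1 was missing, the cached tarball was copied to the node and unpacked straight into /var, after which the same check reports all images as preloaded. Roughly the same sequence by hand (file and image names as they appear in the log; the grep test is an illustrative stand-in for minikube's JSON parsing):

    # Is the expected release image already in CRI-O's store?
    sudo crictl images --output json | grep -q 'kube-apiserver:v1.31.1' || {
      # If not, unpack the cached preload tarball directly into /var (overlay storage + kubelet dirs)
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
      sudo rm -f /preloaded.tar.lz4
    }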
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
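The generated file above is a single kubeadm config holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It is written to /var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml, and on this restart path consumed phase by phase rather than through one monolithic `kubeadm init`, as the later log lines show. Reduced to its shell form (paths and phase names exactly as logged further down; kubeadm here is minikube's bundled binary under /var/lib/minikube/binaries):

    # Promote the rendered config and replay the init phases against it
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    K=/var/lib/minikube/binaries/v1.31.1
    sudo env PATH="$K:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml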
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
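The repetition above is minikube installing its CA certificates into the guest trust store: each PEM is copied to /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then linked again under its subject hash (the 8-hex-digit value such as b5213941.0), which is the name OpenSSL-based clients use for lookups. For one certificate the pattern looks like this (a sketch of the same two steps shown in the log):

    # Compute the subject hash OpenSSL uses to look certificates up in the trust dir ...
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # ... and create the hash-named symlink (b5213941.0 for minikubeCA.pem in the log above)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"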
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
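Before choosing between a clean init and a restart, minikube verifies that every control-plane certificate is still valid for at least another day; `openssl x509 -checkend 86400` exits non-zero when the certificate expires within that many seconds, which minikube treats as a signal to regenerate it. Checking one of the certs listed above, for example:

    # Exit 0 = valid for at least another 86400s (24h); exit 1 = expiring soon, would be regenerated
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "cert ok for >= 24h" || echo "cert expires within 24h"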
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
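The healthz polling above traces the apiserver's normal restart progression: 403 first (the anonymous probe is rejected until the RBAC bootstrap roles exist), then 500 while poststarthooks such as rbac/bootstrap-roles and bootstrap-controller are still failing, and finally a plain 200 "ok". The probe can be reproduced against this cluster with curl; the flags below are illustrative, and -k is needed because the anonymous probe does not present the cluster CA (endpoint and port taken from the log):

    # Anonymous probe: 403/500 while bootstrapping, then 200 once the bootstrap poststarthooks finish
    curl -k -sS -o /dev/null -w '%{http_code}\n' https://192.168.72.203:8444/healthz
    # Append ?verbose to get the per-check [+]/[-] breakdown even on success
    curl -k -sS 'https://192.168.72.203:8444/healthz?verbose'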
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
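(The pod_ready lines above wait for the system-critical pods to report Ready, skipping while the node itself is NotReady. A rough equivalent using plain kubectl is sketched below; minikube talks to the API directly rather than shelling out, and the pod names and kubeconfig path are only examples copied from the log.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{
		"coredns-7c65d6cfc9-5lgsh",
		"etcd-default-k8s-diff-port-754332",
		"kube-apiserver-default-k8s-diff-port-754332",
	}
	for _, pod := range pods {
		// kubectl wait blocks until the Ready condition is true or the timeout elapses.
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod/"+pod, "--timeout=4m")
		if out, err := cmd.CombinedOutput(); err != nil {
			// While the node is NotReady these waits fail, as the log above shows.
			fmt.Printf("pod %s not ready yet: %v\n%s", pod, err, out)
		} else {
			fmt.Printf("pod %s is Ready\n", pod)
		}
	}
}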
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
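(The addon enablement above copies each manifest to /etc/kubernetes/addons/ on the node and applies them with the in-VM kubectl against /var/lib/minikube/kubeconfig. The sketch below reproduces only that final apply step; the sudo wrapper and the SSH transport minikube uses are omitted, and the binary path and manifest list simply mirror the log.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// Build a single "kubectl apply -f ... -f ..." invocation, as in the log.
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nerr=%v\n", out, err)
}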
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
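(The loop above is the old-k8s-version profile's diagnostic pass: it lists containers for each control-plane component with crictl and, finding none, gathers the kubelet, dmesg, and CRI-O journals. The sketch below illustrates that pattern; the command lines are copied from the log, everything else is an assumption.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager",
	}
	missing := false
	for _, name := range components {
		// Empty output from crictl means no container (running or exited) matches the name.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container was found matching %q\n", name)
			missing = true
		}
	}
	if missing {
		// Fall back to the node journals for context, as the log gathering above does.
		for _, unit := range []string{"kubelet", "crio"} {
			logs, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
			fmt.Printf("--- last 400 %s lines ---\n%s\n", unit, logs)
		}
	}
}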
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
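The healthz wait recorded above amounts to polling https://&lt;apiserver&gt;:8443/healthz until it returns 200 with an "ok" body. Below is a minimal, self-contained sketch of such a loop; it is not minikube's api_server.go (which authenticates against the cluster's certificates), and the skipped TLS verification and the endpoint copied from the log are assumptions for illustration only.

// Illustrative sketch of polling an apiserver healthz endpoint until it is healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; real code should verify the apiserver CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Mirrors the "returned 200: ok" line in the report.
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second) // back off before retrying
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.105:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}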
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
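
Note: the log-gathering pass above is just a series of shell commands run on the node (minikube executes them over SSH). A minimal local sketch of the same collection, assuming crictl and journalctl are on PATH and sudo is available; the container ID is a placeholder, not taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmdline string) {
	fmt.Printf("== %s ==\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	if err != nil {
		fmt.Println("command failed:", err)
	}
	fmt.Print(string(out))
}

func main() {
	// Per-container logs, capped at the last 400 lines, as in the run above.
	containerID := "CONTAINERID" // placeholder: take an ID from `sudo crictl ps -a`
	gather("container", "sudo crictl logs --tail 400 "+containerID)

	// Unit logs for the container runtime and the kubelet.
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")

	// Kernel ring buffer, warnings and above only.
	gather("dmesg", "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400")
}
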
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
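
Note: the WaitForService step a few lines above relies on the exit code of systemctl. A minimal sketch of that check, assuming it is run on the node itself (minikube runs it over SSH) and that a plain `systemctl is-active --quiet kubelet` is an acceptable stand-in for the exact command line in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func kubeletActive() bool {
	// is-active --quiet prints nothing; the unit state is signalled via the exit code.
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if kubeletActive() {
			fmt.Println("kubelet service is running")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for the kubelet service")
}
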
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
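
Note: the kubelet-check lines above are kubeadm repeatedly probing the kubelet's health endpoint and getting connection refused. A minimal sketch of that probe, assuming it runs on the control-plane node; a healthy kubelet answers 200 with the body "ok":

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the state kubeadm keeps reporting above: connection refused
		// means the kubelet is not listening yet (or has crashed).
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
}
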
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
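
Note: the "waiting for apiserver process to appear" step above is a pgrep poll. A minimal sketch under the same assumptions (run on the node with sudo; -x matches the pattern against the whole string, -n picks the newest match, -f matches the full command line):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(time.Second) // non-zero exit: no matching process yet
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}
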
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
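
Note: the apiserver healthz wait in the no-preload run above is a plain HTTPS GET against /healthz that succeeds once it returns 200 with body "ok". A minimal sketch using the endpoint from that run; certificate verification is skipped here for brevity, whereas minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.129:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body)) // expect 200: ok
}
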
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
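
Note: the pod_ready wait that just hit its 4m deadline above is polling the pod's Ready condition. A minimal sketch of an equivalent check via kubectl jsonpath, assuming kubectl and a working kubeconfig context; the pod name is copied from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the run above waited 4m0s before giving up
	for time.Now().Before(deadline) {
		if ok, err := podReady("kube-system", "metrics-server-6867b74b74-lxzvw"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("context deadline exceeded: pod never became Ready")
}
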
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
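
Note: the node_conditions lines above report each node's ephemeral-storage and cpu capacity. A minimal sketch of reading the same capacity map with kubectl, assuming kubectl and a kubeconfig context are available; this is an illustration, not how minikube queries it internally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}`).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	// Prints one line per node with its capacity map, e.g. cpu and ephemeral-storage.
	fmt.Print(string(out))
}
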
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
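
Note: the kubeadm output above spells out the triage steps for a control-plane component that crashed under CRI-O. A minimal sketch that runs those exact commands, assuming the CRI-O socket is at /var/run/crio/crio.sock as in this run; the container ID is a placeholder to be filled from the listing:

package main

import (
	"fmt"
	"os/exec"
)

func run(cmdline string) string {
	out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out)
}

func main() {
	const ep = "--runtime-endpoint /var/run/crio/crio.sock"

	// 1. List all Kubernetes containers (running or exited), excluding pause sandboxes.
	fmt.Print(run("sudo crictl " + ep + " ps -a | grep kube | grep -v pause"))

	// 2. Inspect the logs of a suspect container found in step 1.
	containerID := "CONTAINERID" // placeholder: paste an ID from the listing above
	fmt.Print(run("sudo crictl " + ep + " logs " + containerID))
}
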
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
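The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is removed unless it already references https://control-plane.minikube.internal:8443. A rough equivalent, assuming the endpoint and file paths taken from the log (not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Rough equivalent of the logged cleanup: keep a kubeconfig only if it
// already points at the expected control-plane endpoint, otherwise remove it.
func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, path := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors the `sudo rm -f <conf>` calls in the log.
			if rmErr := os.Remove(path); rmErr == nil {
				fmt.Println("removed stale config:", path)
			}
		}
	}
}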
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
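The advice printed above suggests inspecting containers with crictl, and the next log lines show minikube doing exactly that for each control-plane component. A hedged sketch that shells out to the same listing command, assuming crictl is installed and the CRI-O socket sits at /var/run/crio/crio.sock as in the log:

package main

import (
	"fmt"
	"os/exec"
)

// Run the crictl listing suggested in the kubeadm output above. Assumes
// crictl is on PATH and the CRI-O socket path matches the log.
func main() {
	cmd := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"ps", "-a")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
}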
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
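The suggestion above points at a possible kubelet/runtime cgroup-driver mismatch. One naive local check, assuming the kubelet config path shown earlier in the log (/var/lib/kubelet/config.yaml) and simple line-based parsing rather than a YAML library:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Print the cgroupDriver the kubelet was configured with. CRI-O typically
// defaults to the systemd cgroup manager, so a kubelet left on "cgroupfs"
// would explain the mismatch the suggestion refers to.
func main() {
	f, err := os.Open("/var/lib/kubelet/config.yaml")
	if err != nil {
		fmt.Println("cannot read kubelet config:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroupDriver:") {
			fmt.Println("kubelet", line)
		}
	}
}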
	
	
	==> CRI-O <==
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.874361453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277158874332107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6f00d58-b467-4556-88d6-497cfc75a877 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.874985675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15ef38fc-0766-4208-88c8-c831a71be3a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.875067068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15ef38fc-0766-4208-88c8-c831a71be3a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.875283581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15ef38fc-0766-4208-88c8-c831a71be3a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.911237482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e04c59b-5d1a-4018-a7cb-dfa3f30df090 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.911331089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e04c59b-5d1a-4018-a7cb-dfa3f30df090 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.912823373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5049fdd2-fed7-4352-b282-ea85cbcb27d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.913320996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277158913296125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5049fdd2-fed7-4352-b282-ea85cbcb27d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.914008863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=049dbcd1-c7c5-4c68-9810-ad8acc74e712 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.914073103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=049dbcd1-c7c5-4c68-9810-ad8acc74e712 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.914275110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=049dbcd1-c7c5-4c68-9810-ad8acc74e712 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.949225480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72beb88d-e356-4ab1-beca-36d844f0f907 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.949318530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72beb88d-e356-4ab1-beca-36d844f0f907 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.950664595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a9c7ede-ffb6-4654-9727-f64015cc54dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.951203169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277158951179944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a9c7ede-ffb6-4654-9727-f64015cc54dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.951662391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cfd8af2-d905-440b-a539-1181744e0600 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.951721539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cfd8af2-d905-440b-a539-1181744e0600 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.951965937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cfd8af2-d905-440b-a539-1181744e0600 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.985843212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae68fef9-efc6-4851-97d1-bc0206affa09 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.985980565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae68fef9-efc6-4851-97d1-bc0206affa09 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.987201741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d258e9c-b218-4300-972f-79017c482654 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.987632457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277158987607739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d258e9c-b218-4300-972f-79017c482654 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.988381713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=648697c5-108d-4758-88b4-5a6a64c3c899 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.988445811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=648697c5-108d-4758-88b4-5a6a64c3c899 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:25:58 embed-certs-880490 crio[707]: time="2024-09-14 01:25:58.988650594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275836779341978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58091e03cfa3a7caff475ae68f522a287d1b1fe68fcb5f2d64606213027dfe,PodSandboxId:2f370938bf8c585152bbcf60ac119a3146ce06415d690c79d6b474a10bbd41c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275816739787617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3e9a62c-f6d6-4fb8-bb58-13444f20ce95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db,PodSandboxId:a2331c57a4dc3a485de1811d716cb69c3593ca8fec19e9dae9df501080543720,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275813779017878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ssskq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74eab481-dc57-4dd2-a673-33e7d853cee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c,PodSandboxId:89f9ec1e9a56189cca72eca30a75bea6a2002945f3d31d2b2fe09d987af5e94b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275806145343903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7d1d7c67-c4e8-4520-8385-8ea8668177e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98,PodSandboxId:4b00206cb00d64c43641400638c55f716a0bf056ecb4590c7268831c9d4837f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275806015206591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-566n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6fbcc6d-aa8a-4d4a-ab64-929170c01
a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a,PodSandboxId:ea8bdc224dbb87989e8134b44226ff1fc67a88abce2db96a9bac8f30a41c199c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275801223754655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad6d632d381ed4ab1ee6048ec4c6369,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637,PodSandboxId:70abe12994016819ceeef52b8664226770628abf9c9304999a286eb5c7698e4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275801252824531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87df7382585ee576b9d792d7d04cda24,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b,PodSandboxId:dc36e2645d7b9c4aa9a44aa551ff4897b832222d2f648a599fb258b10d8527cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275801264273638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f3b8baf1ec7aaf3248067ebe0da874,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9,PodSandboxId:f20683b3260fdd5510074a8255fe0ec2fad84962acd8c7fd2f20ec8a63674e32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275801244773871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-880490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d21702a29a94f62edc3bf74089631c2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=648697c5-108d-4758-88b4-5a6a64c3c899 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17df87a7f9d1c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   89f9ec1e9a561       storage-provisioner
	5b58091e03cfa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   2f370938bf8c5       busybox
	107cc9128ebff       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago      Running             coredns                   1                   a2331c57a4dc3       coredns-7c65d6cfc9-ssskq
	b065365cf5210       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   89f9ec1e9a561       storage-provisioner
	f0cf7d5e340de       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago      Running             kube-proxy                1                   4b00206cb00d6       kube-proxy-566n8
	5fd32fdb3cf8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      22 minutes ago      Running             kube-controller-manager   1                   dc36e2645d7b9       kube-controller-manager-embed-certs-880490
	9bdf5d4a96c47       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      22 minutes ago      Running             kube-scheduler            1                   70abe12994016       kube-scheduler-embed-certs-880490
	dbe67fa760403       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      22 minutes ago      Running             kube-apiserver            1                   f20683b3260fd       kube-apiserver-embed-certs-880490
	80a81c3710a32       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   ea8bdc224dbb8       etcd-embed-certs-880490
	
	
	==> coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55581 - 34478 "HINFO IN 3891230327164374211.1707641094132755411. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01890407s
	
	
	==> describe nodes <==
	Name:               embed-certs-880490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-880490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=embed-certs-880490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_56_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:56:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-880490
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:25:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:24:23 +0000   Sat, 14 Sep 2024 00:56:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:24:23 +0000   Sat, 14 Sep 2024 00:56:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:24:23 +0000   Sat, 14 Sep 2024 00:56:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:24:23 +0000   Sat, 14 Sep 2024 01:03:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.105
	  Hostname:    embed-certs-880490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eed308fb0627444096ccb4fa733de498
	  System UUID:                eed308fb-0627-4440-96cc-b4fa733de498
	  Boot ID:                    5f85edc3-8197-4198-8ad4-bcedfe67fdcb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7c65d6cfc9-ssskq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-880490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-880490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-880490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-566n8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-880490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-4v8px               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-880490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-880490 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-880490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-880490 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-880490 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-880490 event: Registered Node embed-certs-880490 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-880490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-880490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-880490 event: Registered Node embed-certs-880490 in Controller
	
	
	==> dmesg <==
	[Sep14 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053594] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043097] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep14 01:03] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.900225] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.602607] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.084326] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.062362] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053727] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.184180] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.155415] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.301696] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.110246] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +2.388376] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.062818] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.568256] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.902715] systemd-fstab-generator[1544]: Ignoring "noauto" option for root device
	[  +3.760551] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.158200] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] <==
	{"level":"info","ts":"2024-09-14T01:03:23.316422Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d113b8292a777974","local-member-attributes":"{Name:embed-certs-880490 ClientURLs:[https://192.168.50.105:2379]}","request-path":"/0/members/d113b8292a777974/attributes","cluster-id":"2dbe9e3b76acd0e0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T01:03:23.316421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:23.316579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:23.317645Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:23.317928Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:23.318458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.105:2379"}
	{"level":"info","ts":"2024-09-14T01:03:23.318586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:23.318610Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:23.318770Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T01:03:38.178160Z","caller":"traceutil/trace.go:171","msg":"trace[75736895] linearizableReadLoop","detail":"{readStateIndex:654; appliedIndex:653; }","duration":"144.655686ms","start":"2024-09-14T01:03:38.033472Z","end":"2024-09-14T01:03:38.178128Z","steps":["trace[75736895] 'read index received'  (duration: 144.4297ms)","trace[75736895] 'applied index is now lower than readState.Index'  (duration: 225.411µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T01:03:38.178352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.856802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-880490\" ","response":"range_response_count:1 size:5859"}
	{"level":"info","ts":"2024-09-14T01:03:38.178434Z","caller":"traceutil/trace.go:171","msg":"trace[975723025] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-880490; range_end:; response_count:1; response_revision:618; }","duration":"144.971035ms","start":"2024-09-14T01:03:38.033447Z","end":"2024-09-14T01:03:38.178418Z","steps":["trace[975723025] 'agreement among raft nodes before linearized reading'  (duration: 144.824813ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T01:03:38.178535Z","caller":"traceutil/trace.go:171","msg":"trace[225524194] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"470.330195ms","start":"2024-09-14T01:03:37.708192Z","end":"2024-09-14T01:03:38.178522Z","steps":["trace[225524194] 'process raft request'  (duration: 469.799007ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T01:03:38.179149Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:03:37.708176Z","time spent":"470.409338ms","remote":"127.0.0.1:46026","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:501 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2501 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >"}
	{"level":"warn","ts":"2024-09-14T01:03:38.398802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.083855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-880490\" ","response":"range_response_count:1 size:5487"}
	{"level":"info","ts":"2024-09-14T01:03:38.398905Z","caller":"traceutil/trace.go:171","msg":"trace[741603394] range","detail":"{range_begin:/registry/minions/embed-certs-880490; range_end:; response_count:1; response_revision:618; }","duration":"217.217608ms","start":"2024-09-14T01:03:38.181676Z","end":"2024-09-14T01:03:38.398893Z","steps":["trace[741603394] 'range keys from in-memory index tree'  (duration: 216.980027ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T01:13:23.351694Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":863}
	{"level":"info","ts":"2024-09-14T01:13:23.361836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":863,"took":"9.547241ms","hash":2038378596,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2670592,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-14T01:13:23.362014Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2038378596,"revision":863,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T01:18:23.361606Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2024-09-14T01:18:23.365333Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1106,"took":"3.370204ms","hash":3166876004,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-14T01:18:23.365388Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3166876004,"revision":1106,"compact-revision":863}
	{"level":"info","ts":"2024-09-14T01:23:23.370467Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1350}
	{"level":"info","ts":"2024-09-14T01:23:23.374580Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1350,"took":"3.643432ms","hash":260170158,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-14T01:23:23.374643Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":260170158,"revision":1350,"compact-revision":1106}
	
	
	==> kernel <==
	 01:25:59 up 23 min,  0 users,  load average: 0.06, 0.10, 0.09
	Linux embed-certs-880490 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] <==
	I0914 01:21:25.746416       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:21:25.746487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:23:24.745904       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:23:24.746219       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 01:23:25.747719       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:23:25.747937       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 01:23:25.747772       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:23:25.748151       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:23:25.749252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:23:25.749320       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:24:25.749382       1 handler_proxy.go:99] no RequestInfo found in the context
	W0914 01:24:25.749426       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:24:25.749596       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0914 01:24:25.749609       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:24:25.750737       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:24:25.750803       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] <==
	E0914 01:20:58.439366       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:20:58.939723       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:21:28.446302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:21:28.947050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:21:58.451965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:21:58.954546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:22:28.458268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:22:28.962360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:22:58.464280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:22:58.969703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:23:28.470820       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:23:28.978332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:23:58.477134       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:23:58.985573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:24:23.159903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-880490"
	E0914 01:24:28.483929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:24:28.993419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:24:33.610295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="181.878µs"
	I0914 01:24:44.609636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="105.607µs"
	E0914 01:24:58.489965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:24:59.000647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:25:28.496043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:25:29.008250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:25:58.502394       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:25:59.014985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 01:03:26.330974       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 01:03:26.345943       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.105"]
	E0914 01:03:26.347021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:03:26.423643       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 01:03:26.423682       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 01:03:26.423739       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:03:26.428048       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:03:26.428377       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:03:26.428401       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:26.430394       1 config.go:199] "Starting service config controller"
	I0914 01:03:26.430439       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:03:26.430471       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:03:26.430489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:03:26.433656       1 config.go:328] "Starting node config controller"
	I0914 01:03:26.433732       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:03:26.531053       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 01:03:26.531170       1 shared_informer.go:320] Caches are synced for service config
	I0914 01:03:26.534116       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] <==
	I0914 01:03:22.365568       1 serving.go:386] Generated self-signed cert in-memory
	W0914 01:03:24.676135       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 01:03:24.676184       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 01:03:24.676203       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 01:03:24.676209       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 01:03:24.733021       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 01:03:24.733068       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:24.735216       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 01:03:24.735257       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:03:24.735937       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 01:03:24.736015       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 01:03:24.836057       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 01:24:44 embed-certs-880490 kubelet[915]: E0914 01:24:44.592470     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:24:49 embed-certs-880490 kubelet[915]: E0914 01:24:49.913201     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277089912704456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:24:49 embed-certs-880490 kubelet[915]: E0914 01:24:49.913245     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277089912704456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:24:57 embed-certs-880490 kubelet[915]: E0914 01:24:57.592942     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:24:59 embed-certs-880490 kubelet[915]: E0914 01:24:59.915523     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277099915136991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:24:59 embed-certs-880490 kubelet[915]: E0914 01:24:59.915974     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277099915136991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:09 embed-certs-880490 kubelet[915]: E0914 01:25:09.917483     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277109917103444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:09 embed-certs-880490 kubelet[915]: E0914 01:25:09.917526     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277109917103444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:10 embed-certs-880490 kubelet[915]: E0914 01:25:10.592825     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]: E0914 01:25:19.608506     915 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]: E0914 01:25:19.919082     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277119918701220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:19 embed-certs-880490 kubelet[915]: E0914 01:25:19.919118     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277119918701220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:23 embed-certs-880490 kubelet[915]: E0914 01:25:23.593776     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:25:29 embed-certs-880490 kubelet[915]: E0914 01:25:29.921372     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277129921001676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:29 embed-certs-880490 kubelet[915]: E0914 01:25:29.921653     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277129921001676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:38 embed-certs-880490 kubelet[915]: E0914 01:25:38.592276     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	Sep 14 01:25:39 embed-certs-880490 kubelet[915]: E0914 01:25:39.923478     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277139922924995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:39 embed-certs-880490 kubelet[915]: E0914 01:25:39.923768     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277139922924995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:49 embed-certs-880490 kubelet[915]: E0914 01:25:49.927827     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277149925266202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:49 embed-certs-880490 kubelet[915]: E0914 01:25:49.928427     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277149925266202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:25:50 embed-certs-880490 kubelet[915]: E0914 01:25:50.592415     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v8px" podUID="e291b7c4-a9b2-4715-9d78-926618e87877"
	
	
	==> storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] <==
	I0914 01:03:56.873195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:03:56.883366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:03:56.883428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:04:14.283270       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:04:14.283486       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-880490_ddf3e500-7564-4409-b5e4-032b75313db2!
	I0914 01:04:14.283590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7fe8bb4-cb91-41b4-90e8-cd5d59913cd9", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-880490_ddf3e500-7564-4409-b5e4-032b75313db2 became leader
	I0914 01:04:14.384591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-880490_ddf3e500-7564-4409-b5e4-032b75313db2!
	
	
	==> storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] <==
	I0914 01:03:26.371505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 01:03:56.380497       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-880490 -n embed-certs-880490
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-880490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4v8px
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-880490 describe pod metrics-server-6867b74b74-4v8px
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-880490 describe pod metrics-server-6867b74b74-4v8px: exit status 1 (60.951044ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4v8px" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-880490 describe pod metrics-server-6867b74b74-4v8px: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (544.20s)
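The post-mortem above amounts to two API queries: list the pods whose status.phase is not Running across all namespaces, then describe the survivors (here the describe came back NotFound because the metrics-server pod had already been removed, after sitting in ImagePullBackOff for the image that the test deliberately redirected to fake.domain). A minimal client-go sketch of that check is shown below; it is illustrative only, not part of the minikube test helpers, and it assumes the default kubeconfig with the profile under test (e.g. embed-certs-880490) as the current context.

```go
// Hypothetical post-mortem helper (not from the minikube repo): list pods that
// are not Running and show any container waiting reason, e.g. ImagePullBackOff.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig written by minikube at the default location,
	// with the profile under test already selected as the current context.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same server-side filter as:
	//   kubectl get po -A --field-selector=status.phase!=Running
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				fmt.Printf("  container %s waiting: %s\n", cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}
```

The field selector is evaluated by the API server, so the result matches the kubectl invocation run by helpers_test.go rather than a client-side filter.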

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (384.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-057857 -n no-preload-057857
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-14 01:23:21.83715762 +0000 UTC m=+7016.264802823
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-057857 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-057857 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.107µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-057857 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
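The failure at start_stop_delete_test.go:287 is a timed label-selector wait: the harness gives a pod labelled k8s-app=kubernetes-dashboard 9m0s to reach Running before giving up with context deadline exceeded. The sketch below shows one way to express that kind of wait with client-go; it is a hypothetical illustration, not minikube's actual test helper, and it assumes the default kubeconfig with the no-preload-057857 context current.

```go
// Hypothetical label-selector wait (illustrative only, not the minikube helper).
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// waitForLabeledPod polls every 5s until a pod matching selector in ns reports
// phase Running, or until timeout expires (the test above allowed 9m0s).
func waitForLabeledPod(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			// Mirrors the "context deadline exceeded" seen in the failure above.
			return fmt.Errorf("no Running pod for %q in %q within %s: %w", selector, ns, timeout, ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPod(client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```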
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-057857 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-057857 logs -n 25: (2.122854441s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 01:22 UTC | 14 Sep 24 01:22 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
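For reference, the delta reported above is simply the guest timestamp minus the host timestamp for the same date +%s.%N probe, both values taken from this log:

	1726275751.820875726 (guest) - 1726275751.746149785 (host) = 0.074725941 s = 74.725941ms, i.e. the delta shown as within tolerance.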
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
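Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a sketch: the section headers are cri-o's usual drop-in layout and are assumed here; only the key/value pairs come from the commands in this log.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]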
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
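	The sed invocations above amount to maintaining a small CRI-O drop-in: pause image, cgroup manager, conmon cgroup, and a default_sysctls entry that reopens low ports. A minimal sketch of producing an equivalent /etc/crio/crio.conf.d/02-crio.conf fragment is below; the key names are taken from the log, but the section grouping, local file write, and error handling are illustrative assumptions rather than minikube's actual writer.

	// Sketch only: emit a CRI-O drop-in equivalent to the sed edits logged above.
	// Key names mirror the log; the TOML section layout and local write are assumptions.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		dropIn := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`
		// In the real flow this lands on the guest over SSH; writing locally here.
		if err := os.WriteFile("02-crio.conf", []byte(dropIn), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}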
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
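	The sequence just above is a common bridge-CNI prerequisite check: probe the bridge-nf-call-iptables sysctl, fall back to loading br_netfilter when the probe fails with status 255, then enable IPv4 forwarding before restarting CRI-O. A minimal local sketch, assuming the commands are run directly rather than over minikube's SSH runner:

	// Sketch of the netfilter/ip_forward preparation seen in the log.
	// Runs the commands locally, which is an assumption for illustration;
	// minikube executes them on the guest over SSH.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v: %v\n%s", name, args, err, out)
		}
		return err
	}

	func main() {
		// The probe may fail (status 255) when br_netfilter is not loaded yet.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			_ = run("sudo", "modprobe", "br_netfilter")
		}
		_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}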
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
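	The preload path above is: inspect `crictl images --output json`, and if the expected control-plane image is missing, copy the lz4 tarball over and unpack it into /var before re-checking. A rough sketch of the extraction step, with the tarball path and tar flags taken from the log and local execution (not SSH) assumed for brevity:

	// Sketch of the preload extraction step: unpack the lz4 tarball into /var,
	// preserving xattrs, then remove it. Local execution is an assumption.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		extract := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := extract.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		if out, err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").CombinedOutput(); err != nil {
			log.Fatalf("cleanup failed: %v\n%s", err, out)
		}
	}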
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
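	The three YAML documents above (InitConfiguration, ClusterConfiguration, plus the kubelet and kube-proxy configs) are rendered from the kubeadm options struct logged at kubeadm.go:181. A minimal text/template sketch for the InitConfiguration document only, using the parameters visible in the log (advertise address, bind port, node name, CRI socket); the struct and template here are illustrative placeholders, not minikube's real generator:

	// Minimal sketch: render an InitConfiguration document from a few
	// parameters taken from the log. Types and template are illustrative.
	package main

	import (
		"os"
		"text/template"
	)

	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
		NodeIP           string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		cfg := initCfg{
			AdvertiseAddress: "192.168.72.203",
			BindPort:         8444,
			NodeName:         "default-k8s-diff-port-754332",
			CRISocket:        "unix:///var/run/crio/crio.sock",
			NodeIP:           "192.168.72.203",
		}
		t := template.Must(template.New("init").Parse(initTmpl))
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}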
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
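	Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check in Go's crypto/x509, as a sketch; the certificate path is one of the files checked above, and error handling is kept minimal:

	// Sketch of `openssl x509 -noout -in <cert> -checkend 86400`:
	// parse a PEM certificate and report whether it expires within 24h.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Path taken from the checks in the log; adjust as needed.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}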
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
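	The /healthz polling above runs roughly every 500ms and tolerates 403 and 500 responses until the apiserver's post-start hooks (rbac/bootstrap-roles and friends) settle. A minimal polling sketch against the endpoint from the log; skipping TLS verification is an assumption made for brevity (the anonymous 403s suggest an unauthenticated probe), not necessarily what minikube does:

	// Sketch: poll https://192.168.72.203:8444/healthz every 500ms until it
	// returns 200 or a timeout elapses. TLS verification is skipped here
	// as a simplifying assumption, not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.203:8444/healthz")
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", code)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}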
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
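[Editor's note: the readiness wait recorded above (pod_ready.go polling system-critical pods in kube-system) can be approximated by hand with client-go. The sketch below is not minikube's pod_ready.go; the namespace and label selectors are taken from the log lines above, while the kubeconfig handling, the 4-minute timeout variable, and the simple print-out are illustrative assumptions.]

// readiness_sketch.go - a minimal sketch of the check the log reports: list the
// system-critical pods in kube-system by label and report whether each carries the
// PodReady condition with status True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: default kubeconfig location; minikube itself talks to the
	// cluster through its own generated kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same selectors the log waits on for system-critical pods.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
		}
	}
}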
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
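[Editor's note: after the addon-enable phase above ("Verifying addon metrics-server=true"), a quick manual spot-check is to look at the metrics-server Deployment's available replicas. The deployment name and namespace follow from the log (addon manifest metrics-server-deployment.yaml, pods named metrics-server-6867b74b74-*); the kubeconfig handling and output format below are illustrative assumptions, not minikube's own verification code.]

// metrics_addon_check_sketch.go - fetch the metrics-server Deployment in
// kube-system and print how many replicas report as available.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	dep, err := client.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The "Ready":"False" pod_ready lines elsewhere in this log correspond to
	// AvailableReplicas staying at 0 here.
	fmt.Printf("metrics-server: %d/%d replicas available\n", dep.Status.AvailableReplicas, dep.Status.Replicas)
}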
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
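[Editor's note: the cycle above for process 74039 repeatedly runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and then falls back to `crictl ps -a --quiet --name=kube-apiserver` when no apiserver appears. The sketch below is a rough local approximation of that loop; minikube actually executes these commands over SSH inside the guest VM, and the 30-second deadline here is an illustrative assumption.]

// apiserver_poll_sketch.go - poll for a running kube-apiserver process every
// 500ms, and if none appears before the deadline, ask the CRI runtime whether
// any kube-apiserver container exists at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// Same check the log repeats: is a kube-apiserver process running?
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}

	// No live process: list kube-apiserver containers known to the runtime,
	// mirroring the crictl fallback step in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("no running apiserver; %d container(s) found by crictl\n", len(ids))
}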
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
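(The cycle above keeps repeating the same diagnostic sweep: minikube probes for each control-plane container with crictl, then gathers kubelet, dmesg, describe-nodes and CRI-O output, and every describe-nodes attempt fails because nothing is listening on localhost:8443. For reference only, the same sweep can be reproduced by hand on the node with the commands the log shows; the paths below are copied verbatim from the log lines above, and running them manually is an illustration, not part of the test:)

	# probe for a control-plane container (repeat for etcd, coredns, kube-scheduler, ...)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# gather the same logs minikube collects
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400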
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
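(The interleaved pod_ready.go lines come from the other StartStop runs polling the Ready condition of their metrics-server pods. The same condition can be read directly with kubectl; the label selector and jsonpath below are assumptions added for illustration, not what the test itself runs:)

	# print each metrics-server pod's name and its Ready condition status
	kubectl -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'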
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
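
Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8443, which matches the empty crictl listings above (no kube-apiserver container is up yet). A quick way to confirm that from the node, sketched with standard tools rather than anything minikube-specific (the port and kubeconfig path are taken from the log; ss and curl are this note's additions):

  # Is anything listening on the apiserver port?
  sudo ss -ltn 'sport = :8443'
  # If so, does it answer a basic health probe? (-k because the apiserver cert is self-signed)
  curl -k https://localhost:8443/healthz
  # The same check through kubectl, with the kubeconfig path from the log:
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
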
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
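
With no control-plane containers to inspect, each iteration collects host-level evidence instead: kubelet and CRI-O output from journald, recent kernel warnings, and a raw container listing. Run by hand, the collection is just the commands already shown in the log:

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
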
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
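
Interleaved with that probe loop are readiness polls from three other test processes (PIDs 73455, 73629 and 74318), each waiting on a metrics-server pod in kube-system that never reports Ready. An equivalent one-off check with kubectl, using a pod name copied from the log (a sketch, not part of the test harness):

  # Print the pod's Ready condition status.
  kubectl -n kube-system get pod metrics-server-6867b74b74-lxzvw \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
  # Or block until it turns Ready, failing after the timeout (as these tests eventually do).
  kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-lxzvw --timeout=60s
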
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
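	Interleaved with that cycle, three other test runs (PIDs 74318, 73629 and 73455) keep polling their metrics-server pods, which never report Ready. One way to inspect the same condition by hand with kubectl rather than the harness's Go API client (a sketch: <profile> stands for the corresponding minikube profile name, which these lines do not show, and the jsonpath expression is not taken from the log; the other two pods are checked analogously):

	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-4v8px \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the pod is unready, matching the pod_ready lines above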
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
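	Every describe-nodes attempt in these cycles fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty kube-apiserver listing above: nothing is serving on the apiserver port. Two manual probes that distinguish a refused TCP connection from an apiserver that answers but rejects the request (assumptions: shell access to the node, and that ss and curl are installed there; neither command nor the /healthz path appears in the log):

	    sudo ss -ltnp | grep 8443                  # is any process listening on 8443?
	    curl -k https://localhost:8443/healthz     # connection refused vs. an HTTP answer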
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
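
The interleaved pod_ready lines come from other test profiles polling their metrics-server pods, which never report a True Ready condition. Below is a minimal client-go sketch of that kind of readiness check; the kubeconfig path is a hypothetical placeholder and the pod name is copied from the log, so this is an illustration rather than the test suite's own code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True -- the check
// whose failure keeps printing `has status "Ready":"False"` in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; each test profile uses its own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-lxzvw", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("metrics-server is Ready")
			return
		}
		fmt.Println("metrics-server not Ready yet, retrying")
		time.Sleep(2 * time.Second)
	}
}
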
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
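
Every describe-nodes attempt fails the same way: with no kube-apiserver container running, nothing listens on localhost:8443, so kubectl's connection is refused. A tiny reachability probe equivalent to that failure mode, purely as an illustration and not part of the test suite, could look like this.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The node's kubeconfig points kubectl at localhost:8443; with no
	// kube-apiserver container, the TCP dial is refused, which is the
	// "connection to the server localhost:8443 was refused" error above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
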
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
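	The failure above ends with minikube's own suggestion: check 'journalctl -xeu kubelet' and retry with the kubelet pinned to the systemd cgroup driver. A minimal follow-up sketch of those two steps, assuming a generic profile name (PROFILE is a placeholder, not taken from this run):
	
		# inspect the kubelet logs on the node, as the suggestion recommends
		minikube -p PROFILE ssh -- sudo journalctl -xeu kubelet | tail -n 50
		# retry the start with the kubelet cgroup driver set to systemd, per the suggestion
		minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd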
	
	
	==> CRI-O <==
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.406394241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277003406371781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=359349bf-fd25-4b76-98dc-50ff84de660f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.407139168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc56582e-af36-436e-aa2d-d2e9d84d581e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.407212485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc56582e-af36-436e-aa2d-d2e9d84d581e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.407529097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc56582e-af36-436e-aa2d-d2e9d84d581e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.441709227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3d14c85-df52-438a-b65b-e4817daad36f name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.441785113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3d14c85-df52-438a-b65b-e4817daad36f name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.442895697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=190ae168-0c0c-47c1-bf7f-a5e923dcf07f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.443247052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277003443225387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=190ae168-0c0c-47c1-bf7f-a5e923dcf07f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.443765907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fb3ff87-8741-4420-956a-f2b9d99862e3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.443865435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fb3ff87-8741-4420-956a-f2b9d99862e3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.444140488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fb3ff87-8741-4420-956a-f2b9d99862e3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.478562722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d67301ca-a79c-4445-a9fa-80fcfec65bb7 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.478636468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d67301ca-a79c-4445-a9fa-80fcfec65bb7 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.479637204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fbf758a-f154-4723-bd93-57e02f28f4b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.480033659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277003480008399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fbf758a-f154-4723-bd93-57e02f28f4b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.480489363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c428afaf-3444-4c84-8e40-9b49e568d49b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.480560298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c428afaf-3444-4c84-8e40-9b49e568d49b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.480799121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c428afaf-3444-4c84-8e40-9b49e568d49b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.511903483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5de07d42-e012-4009-8831-c37837593bee name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.511991208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5de07d42-e012-4009-8831-c37837593bee name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.513148902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8621893-7714-4297-8bbb-f8a351aaa0cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.513993377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277003513875719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8621893-7714-4297-8bbb-f8a351aaa0cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.515296016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f579ea09-5af9-474b-a275-7c7e3e8383fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.515408381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f579ea09-5af9-474b-a275-7c7e3e8383fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:23 no-preload-057857 crio[707]: time="2024-09-14 01:23:23.515654603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f,PodSandboxId:fb24127a7326e00e96b2bf69973d9633a1c99bb1ef41be521c8faaf6eb393b59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071491583735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jqk6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef11f33-25b0-4b58-bbea-4cd43f02955c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af,PodSandboxId:1cdcee7c292ebbd96f884295d8bc76f14f14ee893046950d060b6738cb057c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726276071376463139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52vdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c6d8bc35-9a11-4903-a681-767cf3584d68,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce,PodSandboxId:8a439c2545ee78a05ebb41d796d05ecd83cb43befd10f3246fea398961a72548,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1726276070582983981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05866937-f16f-4aea-bf2d-3e6d644a5fa7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2,PodSandboxId:d865833a3ef1345266428f990c96dc40dd07a2647e610b8f3514c6614117ca98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726276069784675679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6d75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8d2b77d-820d-4a2e-ab4e-83909c0e1382,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4,PodSandboxId:25514cd4c9cece22a67d738b18fcfac793e36fd8e60df01d7f8106097907cee8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726276058827754843,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e,PodSandboxId:ac4823a25b9648dfa12bd4d9c6ad2c062b5c92b85c561329fdfa4dae07159393,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726276058819553396,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79a931f08846923d11460c64d99eb58a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136,PodSandboxId:b5cd553992fd1497454d346aeb42f063ea35debd359a77c4bd8a03ca7ab914cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726276058832503600,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2137bec0a278efd053ce1af1b781cb7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489,PodSandboxId:7b7b7b9b2be1789a5b61d75f4c928de4b15cb2463e795285b1277e5e4f1f411a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726276058757501630,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c506906acb54a86d11c045acdfea675,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417,PodSandboxId:52f5718e13ae1f7c88fcda11b0e3820eedafb874520651cfd9fb5ab9e15cbf65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726275771315928687,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-057857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d321a6118d985d00a4078bca4e51eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f579ea09-5af9-474b-a275-7c7e3e8383fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d9d3a688e481       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   fb24127a7326e       coredns-7c65d6cfc9-jqk6k
	8a8da47be06ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   1cdcee7c292eb       coredns-7c65d6cfc9-52vdb
	8daf98a703f89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   8a439c2545ee7       storage-provisioner
	dd7bb23d93588       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago      Running             kube-proxy                0                   d865833a3ef13       kube-proxy-m6d75
	6e6a8583ab886       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   b5cd553992fd1       kube-scheduler-no-preload-057857
	51a277db64b96       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   25514cd4c9cec       kube-apiserver-no-preload-057857
	5267b5229d2c0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   ac4823a25b964       etcd-no-preload-057857
	1b84acc249655       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   7b7b7b9b2be17       kube-controller-manager-no-preload-057857
	5ed647f42f39c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 minutes ago      Exited              kube-apiserver            1                   52f5718e13ae1       kube-apiserver-no-preload-057857
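The container status table above is the CRI's view of the node. If the VM is still running, the same listing can likely be reproduced over SSH with crictl (a hedged sketch; assumes crictl is available on the node, which the cri-o logs above suggest):

  $ minikube ssh -p no-preload-057857 "sudo crictl ps -a"
  $ minikube ssh -p no-preload-057857 "sudo crictl logs 5ed647f42f39c"   # the exited kube-apiserver attempt 1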
	
	
	==> coredns [7d9d3a688e48114abea52bd26f885fa204c56169569390689dfa1e463ecfd99f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8a8da47be06ba3b5c8235263281b79d6df20b29ab22d7c9354247788115386af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-057857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-057857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=no-preload-057857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 01:07:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-057857
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:23:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:23:11 +0000   Sat, 14 Sep 2024 01:07:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:23:11 +0000   Sat, 14 Sep 2024 01:07:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:23:11 +0000   Sat, 14 Sep 2024 01:07:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:23:11 +0000   Sat, 14 Sep 2024 01:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    no-preload-057857
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef9e0c97a2104446af328f45caad6a6f
	  System UUID:                ef9e0c97-a210-4446-af32-8f45caad6a6f
	  Boot ID:                    914bc9f4-9209-4c8f-8750-74d7cb6ca8e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-52vdb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-jqk6k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-057857                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-057857             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-057857    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-m6d75                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-057857             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-d78nt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node no-preload-057857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node no-preload-057857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node no-preload-057857 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-057857 event: Registered Node no-preload-057857 in Controller
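This section is the standard node description; with the test cluster still up it should match the output of the commands below (context name assumed to equal the minikube profile, as elsewhere in this report):

  $ kubectl --context no-preload-057857 describe node no-preload-057857
  $ kubectl --context no-preload-057857 get node no-preload-057857 -o wide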
	
	
	==> dmesg <==
	[  +0.050817] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769878] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.913723] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.532564] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.693656] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.066900] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058408] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.195235] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.128298] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.295720] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[ +15.452579] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.059300] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.967947] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +5.424083] kauditd_printk_skb: 97 callbacks suppressed
	[Sep14 01:03] kauditd_printk_skb: 86 callbacks suppressed
	[Sep14 01:07] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.897177] systemd-fstab-generator[3005]: Ignoring "noauto" option for root device
	[  +4.692564] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.893055] systemd-fstab-generator[3330]: Ignoring "noauto" option for root device
	[  +4.911694] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +0.134125] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.403521] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5267b5229d2c08c4ccadba102bb936f1c1134f82e70277a6c6468e7b5f8e0c5e] <==
	{"level":"info","ts":"2024-09-14T01:07:39.565092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T01:07:39.565129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 1"}
	{"level":"info","ts":"2024-09-14T01:07:39.565236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.565307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.565365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.565401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-09-14T01:07:39.570165Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.575027Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:no-preload-057857 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T01:07:39.575076Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:07:39.575157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:07:39.580752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:07:39.596622Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T01:07:39.596662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T01:07:39.586309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.599929Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.591244Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:07:39.600773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T01:07:39.600927Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T01:07:39.606069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2024-09-14T01:17:39.698749Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-09-14T01:17:39.707532Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"8.344287ms","hash":3977222850,"current-db-size-bytes":2101248,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-09-14T01:17:39.707597Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3977222850,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T01:22:39.710492Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2024-09-14T01:22:39.714728Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":932,"took":"3.71719ms","hash":503936282,"current-db-size-bytes":2101248,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1544192,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-14T01:22:39.714804Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":503936282,"revision":932,"compact-revision":689}
	
	
	==> kernel <==
	 01:23:23 up 21 min,  0 users,  load average: 0.26, 0.25, 0.18
	Linux no-preload-057857 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [51a277db64b96c273b8baa74363ef03c178a63c50cbaf518c8165971a29bf3e4] <==
	I0914 01:18:42.503376       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:18:42.503441       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:20:42.504419       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:20:42.504581       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 01:20:42.504703       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:20:42.504866       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:20:42.505759       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:20:42.506989       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:22:41.504168       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:22:41.504929       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 01:22:42.507569       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:22:42.507733       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 01:22:42.507770       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:22:42.507880       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 01:22:42.509015       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:22:42.509062       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
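The repeated 503s above come from the aggregated v1beta1.metrics.k8s.io APIService, whose backend never becomes available; this lines up with the metrics-server related failures in this run. A hedged way to inspect the APIService and its backing pod (the k8s-app=metrics-server label is the usual one for the addon and is assumed here):

  $ kubectl --context no-preload-057857 get apiservice v1beta1.metrics.k8s.io
  $ kubectl --context no-preload-057857 -n kube-system get pods -l k8s-app=metrics-server
  $ kubectl --context no-preload-057857 -n kube-system describe pod metrics-server-6867b74b74-d78nt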
	
	
	==> kube-apiserver [5ed647f42f39cdcf15a603c2ca329b9a35795a47e345a6b400ab5d64cbe99417] <==
	W0914 01:07:31.206059       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.209467       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.212972       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.266408       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.307778       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.430114       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.441918       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.534297       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.534298       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.562493       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.571114       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.573590       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.584032       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.594230       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.595473       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.598862       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.645381       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.724543       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.783121       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.790685       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.843139       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.862096       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:31.884109       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:32.034517       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 01:07:32.201599       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
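These connection-refused errors are from the earlier kube-apiserver attempt (container 5ed647f42f39c) dialing etcd on 127.0.0.1:2379 before etcd came back up (the etcd log above shows it electing a leader at 01:07:39); that container exited and attempt 2 succeeded. If needed, the previous attempt's full log can usually be pulled with (hedged; requires the restarted container to still be retained by the kubelet):

  $ kubectl --context no-preload-057857 -n kube-system logs kube-apiserver-no-preload-057857 --previous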
	
	
	==> kube-controller-manager [1b84acc249655ba746eb277e613dd7d48498083e7501bc0e2f8bac5bd687f489] <==
	E0914 01:18:18.449548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:18:18.994080       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:18:48.456380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:18:49.004022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:19:00.410053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="169.859µs"
	I0914 01:19:11.404073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="108.418µs"
	E0914 01:19:18.464072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:19:19.012265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:19:48.470405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:19:49.019877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:20:18.476690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:20:19.028446       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:20:48.484531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:20:49.037150       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:21:18.491196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:21:19.046037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:21:48.499303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:21:49.054919       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:22:18.505601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:22:19.062710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:22:48.513063       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:22:49.072742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:23:11.925677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-057857"
	E0914 01:23:18.519728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:23:19.080603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [dd7bb23d935884b0f9a278e8e39ae5bb958b13e952751fbc939da3a32e7630d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 01:07:50.123393       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 01:07:50.142609       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	E0914 01:07:50.142698       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:07:50.206277       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 01:07:50.206321       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 01:07:50.206349       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:07:50.208888       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:07:50.209179       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:07:50.209207       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:07:50.210449       1 config.go:199] "Starting service config controller"
	I0914 01:07:50.210488       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:07:50.210512       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:07:50.210528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:07:50.211113       1 config.go:328] "Starting node config controller"
	I0914 01:07:50.211137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:07:50.311743       1 shared_informer.go:320] Caches are synced for service config
	I0914 01:07:50.311803       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 01:07:50.313659       1 shared_informer.go:320] Caches are synced for node config
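The nftables cleanup errors at the top of this section are kube-proxy's best-effort removal of nftables state on a kernel that rejects the nft operations; the proxy then selects the iptables backend ("Using iptables Proxier"), so service traffic is handled by iptables. A hedged spot-check of the resulting rules on the node:

  $ minikube ssh -p no-preload-057857 "sudo iptables -t nat -L KUBE-SERVICES | head"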
	
	
	==> kube-scheduler [6e6a8583ab88698ddd808acecc37e1d9b3157cd04b156883c8a19bfa595b3136] <==
	W0914 01:07:42.392720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 01:07:42.393336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.394488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 01:07:42.395872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.409148       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 01:07:42.409675       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 01:07:42.478104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 01:07:42.478249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.495402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 01:07:42.495651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.530095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 01:07:42.530215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.554227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.554355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.593126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.593191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.651438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.651659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.709346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 01:07:42.709415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.791901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 01:07:42.791951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:07:42.792797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:42.792865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0914 01:07:44.414058       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 01:22:12 no-preload-057857 kubelet[3337]: E0914 01:22:12.390171    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:22:14 no-preload-057857 kubelet[3337]: E0914 01:22:14.648973    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276934648564362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:14 no-preload-057857 kubelet[3337]: E0914 01:22:14.649366    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276934648564362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:23 no-preload-057857 kubelet[3337]: E0914 01:22:23.390168    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:22:24 no-preload-057857 kubelet[3337]: E0914 01:22:24.651694    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276944651288160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:24 no-preload-057857 kubelet[3337]: E0914 01:22:24.651732    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276944651288160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:34 no-preload-057857 kubelet[3337]: E0914 01:22:34.653290    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276954652730813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:34 no-preload-057857 kubelet[3337]: E0914 01:22:34.653335    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276954652730813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:38 no-preload-057857 kubelet[3337]: E0914 01:22:38.390444    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]: E0914 01:22:44.407084    3337 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]: E0914 01:22:44.655036    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276964654688078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:44 no-preload-057857 kubelet[3337]: E0914 01:22:44.655093    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276964654688078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:53 no-preload-057857 kubelet[3337]: E0914 01:22:53.389525    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:22:54 no-preload-057857 kubelet[3337]: E0914 01:22:54.658026    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276974657495457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:54 no-preload-057857 kubelet[3337]: E0914 01:22:54.658452    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276974657495457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:04 no-preload-057857 kubelet[3337]: E0914 01:23:04.660765    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276984660152271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:04 no-preload-057857 kubelet[3337]: E0914 01:23:04.661166    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276984660152271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:05 no-preload-057857 kubelet[3337]: E0914 01:23:05.390162    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	Sep 14 01:23:14 no-preload-057857 kubelet[3337]: E0914 01:23:14.663675    3337 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276994663098205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:14 no-preload-057857 kubelet[3337]: E0914 01:23:14.664154    3337 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276994663098205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:17 no-preload-057857 kubelet[3337]: E0914 01:23:17.390387    3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d78nt" podUID="5f77cfda-f8e2-4b08-8050-473c500f7504"
	
	
	==> storage-provisioner [8daf98a703f8967095269cbd5ce06fac27fcfa52e66f2e89292255aa09eb7dce] <==
	I0914 01:07:50.705551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:07:50.715738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:07:50.715798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:07:50.732764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:07:50.733417       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b568b778-0489-476e-97d6-3d355719ba43", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-057857_dfab0f72-8b30-4b2b-ae5d-6c1c1adc97fc became leader
	I0914 01:07:50.735992       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-057857_dfab0f72-8b30-4b2b-ae5d-6c1c1adc97fc!
	I0914 01:07:50.836835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-057857_dfab0f72-8b30-4b2b-ae5d-6c1c1adc97fc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-057857 -n no-preload-057857
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-057857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-d78nt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-057857 describe pod metrics-server-6867b74b74-d78nt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-057857 describe pod metrics-server-6867b74b74-d78nt: exit status 1 (66.385475ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-d78nt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-057857 describe pod metrics-server-6867b74b74-d78nt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (384.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (391.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-14 01:23:43.059804547 +0000 UTC m=+7037.487449742
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-754332 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.12µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-754332 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-754332 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-754332 logs -n 25: (2.099748882s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 01:22 UTC | 14 Sep 24 01:22 UTC |
	| delete  | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 01:23 UTC | 14 Sep 24 01:23 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
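The "will retry after ..." lines above come from a poll-with-growing-backoff loop while the guest waits for a DHCP lease. A minimal sketch of that pattern is below; it is not minikube's actual retry.go, and lookupLeaseIP is a hypothetical stand-in for querying libvirt's leases for the MAC address shown in the log.

// Illustrative sketch only: poll until the guest shows up in the DHCP leases,
// waiting a little longer after every failed look-up.
package main

import (
	"errors"
	"fmt"
	"time"
)

var attempts int

// lookupLeaseIP is a fake placeholder; pretend the lease appears on the 4th poll.
func lookupLeaseIP(mac string) (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.129", nil
}

func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off a little more each round
	}
	return "", fmt.Errorf("no IP for %s within %v", mac, deadline)
}

func main() {
	ip, err := waitForIP("52:54:00:12:57:32", 30*time.Second)
	fmt.Println(ip, err)
}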
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
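The WaitForSSH step above simply runs `exit 0` over an external ssh client until it returns status 0. A rough sketch of that probe (not minikube's code; sshReady is a made-up helper, the address and key path are the ones printed in the log) looks like this:

// Sketch: SSH is considered ready once a trivial remote command succeeds.
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil // exit status 0 means sshd is accepting sessions
}

func main() {
	ok := sshReady("192.168.39.129",
		"/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa")
	fmt.Println("ssh ready:", ok)
}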
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
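The guest-clock check above reads `date +%s.%N` on the guest and compares it with the host's wall clock; here the delta is about 74.7ms, which passes. A minimal sketch of that comparison, using the exact values from the log and a hypothetical 2s tolerance, is:

// Sketch: parse the guest's `date +%s.%N` output and compare with the host time.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1726275751.820875726" // from the log above
	host := time.Date(2024, 9, 14, 1, 2, 31, 746149785, time.UTC)

	secs, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // hypothetical threshold for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n",
		delta, delta < tolerance && delta > -tolerance)
}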
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
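The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10 and cgroup_manager is set to cgroupfs. The snippet below reproduces that kind of line rewrite in Go on a sample fragment; the starting values in the fragment are invented for illustration.

// Sketch: the same whole-line replacement the sed commands perform, via regexp.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}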
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
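The sysctl failure above is expected: the bridge-nf key does not exist until br_netfilter is loaded, so the tool falls back to modprobe and then enables IPv4 forwarding. A small sketch of that fallback, mirroring the commands in the log (the helper name is made up):

// Sketch: load br_netfilter when the bridge-nf sysctl is missing, then enable forwarding.
package main

import (
	"fmt"
	"os/exec"
)

func enableBridgeNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
		// sysctl can't see the key until the module is loaded
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println("bridge netfilter setup:", enableBridgeNetfilter())
}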
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
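	Note: during the image-load step above, each cached tarball is first checked on the VM with stat -c "%s %y", and the transfer is skipped when the remote file already matches, which is what the "copy: skipping ... (exists)" lines report. A rough Go sketch of that skip decision, assuming a size comparison against the captured stat output is sufficient (the helper name and sample values below are illustrative, not taken from this run):

package main

import (
	"fmt"
	"os"
	"strings"
)

// alreadyPresent compares a local file against the `stat -c "%s %y"` output
// captured from the remote host and reports whether the copy can be skipped.
// remoteStat looks like: "28680704 2024-09-14 01:02:34.000000000 +0000".
func alreadyPresent(localPath, remoteStat string) (bool, error) {
	fields := strings.SplitN(strings.TrimSpace(remoteStat), " ", 2)
	if len(fields) != 2 {
		return false, fmt.Errorf("unexpected stat output: %q", remoteStat)
	}
	info, err := os.Stat(localPath)
	if err != nil {
		return false, err
	}
	// Size match is the cheap first-order check; a stricter version would
	// also compare the modification time carried in fields[1].
	return fields[0] == fmt.Sprintf("%d", info.Size()), nil
}

func main() {
	// Hypothetical local cache path mirroring the ones in the log; the stat
	// string is a made-up example of the remote command's output.
	ok, err := alreadyPresent(
		"/home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1",
		"28680704 2024-09-14 01:02:34.000000000 +0000")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("skip copy:", ok)
}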
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
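	Note: the block above verifies each control-plane certificate with openssl x509 -noout -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. A minimal Go sketch of the same check (the file path and 24-hour window below are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// before now+d, mirroring what openssl x509 -checkend tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// Path and 24h window chosen to resemble the checks in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}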
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
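	Note: the wait loop above polls the apiserver at https://192.168.39.129:8443/healthz until it answers. A minimal Go sketch of that kind of probe, assuming the cluster's self-signed apiserver certificate is acceptable for this liveness-style check (the URL is the one printed in the log; the timeout value is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it returns HTTP 200
// or the deadline passes. The endpoint normally answers with the body "ok".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a cluster-internal certificate, so skip
			// verification for this probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	// URL as printed in the log; the 2-minute budget is an assumed value.
	if err := waitHealthz("https://192.168.39.129:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}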
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
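
The filesync lines a few entries up show how minikube mirrors anything under .minikube/files into the guest at the same relative path (files/etc/ssl/certs/126022.pem lands in /etc/ssl/certs). A minimal local sketch of that mapping in Go, assuming a plain directory walk; the real code ships each file over SSH via ssh_runner rather than just printing it:

package main

import (
    "fmt"
    "io/fs"
    "path/filepath"
)

// plan prints each local asset under root together with its in-guest
// destination, mirroring the "files/<path> -> /<path>" convention in the log.
func plan(root string) error {
    return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return err
        }
        rel, err := filepath.Rel(root, path)
        if err != nil {
            return err
        }
        dest := "/" + filepath.ToSlash(rel)
        fmt.Printf("local asset: %s -> %s\n", path, dest)
        return nil
    })
}

func main() {
    if err := plan("/home/jenkins/minikube-integration/19640-5422/.minikube/files"); err != nil {
        fmt.Println(err)
    }
}
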
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
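
The guest-clock check immediately above runs date +%s.%N in the VM and compares the result with the host's record of when the command ran; the 94ms delta is accepted as within tolerance. A small Go sketch of the same comparison, using the two timestamps from the log; the tolerance constant is an illustrative assumption, not minikube's actual value:

package main

import (
    "fmt"
    "strconv"
    "time"
)

// clockDelta parses the guest's `date +%s.%N` output (seconds.nanoseconds)
// and returns how far the guest clock is ahead of (or behind) hostTime.
func clockDelta(guestOutput string, hostTime time.Time) (time.Duration, error) {
    secs, err := strconv.ParseFloat(guestOutput, 64)
    if err != nil {
        return 0, err
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    return guest.Sub(hostTime), nil
}

func main() {
    // Both values are taken from the log lines above.
    host := time.Unix(0, 1726275772199751432) // 2024-09-14 01:02:52.199751432 UTC
    delta, err := clockDelta("1726275772.293932338", host)
    if err != nil {
        panic(err)
    }
    if delta < 0 {
        delta = -delta
    }
    tolerance := 2 * time.Second // illustrative threshold, not minikube's constant
    fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
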
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
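
Earlier in this block the log restarts CRI-O and then waits up to 60s for /var/run/crio/crio.sock before probing crictl version. A rough sketch of that kind of bounded wait, assuming a local stat loop; minikube performs the stat over SSH, and the 500ms poll interval here is an assumption:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForSocket polls until the path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        time.Sleep(500 * time.Millisecond) // illustrative poll interval
    }
    return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
    if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        fmt.Println(err)
    }
}
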
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
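
The 403 then 500 then 200 responses above are the normal apiserver warm-up sequence: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, then individual poststarthooks report as failed until they finish, and finally the endpoint returns ok. A condensed sketch of polling /healthz until it answers 200; skipping TLS verification and the 500ms retry interval are simplifying assumptions for the sketch:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers 200 OK.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            // The apiserver serves a self-signed cert during bootstrap; a real
            // client would pin the cluster CA instead of skipping verification.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.39.129:8443/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}
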
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
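
The repeated "will retry after ...: waiting for machine to come up" lines are a backoff loop that keeps re-querying the DHCP leases until the freshly started domain reports an IP. A generic sketch of that retry shape; the jitter, growth factor, and stubbed lease lookup are assumptions rather than minikube's retry.go internals:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP stands in for querying the libvirt DHCP leases; it is a stub here.
func lookupIP(domain string) (string, error) {
    return "", errNoIP
}

// waitForIP retries lookupIP with a randomized, growing delay, similar in
// spirit to the retry lines in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    delay := 200 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(domain); err == nil {
            return ip, nil
        }
        wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
        fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
        time.Sleep(wait)
        delay = delay * 3 / 2 // grow the base delay
    }
    return "", fmt.Errorf("machine %q did not get an IP within %v", domain, timeout)
}

func main() {
    _, err := waitForIP("embed-certs-880490", 3*time.Second)
    fmt.Println(err)
}
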
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
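
pod_ready.go is polling each control-plane pod for the Ready condition, and skips ahead here because the node itself still reports Ready=False. A condensed client-go sketch of the underlying check; the kubeconfig path and the single Get call are illustrative assumptions (the real flow retries until the condition flips or the 4m0s budget runs out):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-057857", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("ready:", podIsReady(pod))
}
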
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
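
	The sshutil.go:53 lines above show the test opening SSH sessions to the VM (IP and key path are printed in the log) and then issuing commands such as "cat /version.json", "curl -sS https://registry.k8s.io/" and "systemctl --version" through them. A minimal sketch of running one such remote command with golang.org/x/crypto/ssh follows; the address, user and key path are taken from the log, while the structure, helper-free style and error handling are illustrative and are not minikube's actual ssh_runner implementation.

// sketch_ssh_run.go - illustrative only; not minikube's ssh_runner.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values taken from the sshutil.go:53 line above.
	addr := "192.168.50.105:22"
	user := "docker"
	keyPath := "/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per command, matching the repeated "Run:" lines in the log.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("systemctl --version")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
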
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
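
	Taken together, the sed/grep edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough sketch of what /etc/crio/crio.conf.d/02-crio.conf might look like after those edits is below, embedded in a Go string for reference; the section headers are an assumption based on CRI-O's documented config layout, and only the key/value pairs come from the log.

// Illustrative only: approximate shape of the CRI-O drop-in after the edits
// above. Section names ([crio.image], [crio.runtime]) are assumed from
// CRI-O's config layout; the values are the ones set by the sed commands.
package main

const crioDropIn = `
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { println(crioDropIn) }
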
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
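
	The run of pgrep lines above (process 74039) is a fixed-interval wait for a kube-apiserver process to appear, repeated roughly every 500ms. A minimal sketch of that style of poll loop is below; the pgrep pattern comes from the log, while the interval, timeout and structure are illustrative rather than minikube's api_server.go code.

// Illustrative poll loop: wait for a kube-apiserver process to appear,
// roughly mirroring the repeated pgrep calls in the log above.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond) // ~ the cadence seen above
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for kube-apiserver process")
		case <-ticker.C:
			// Same check as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				log.Println("kube-apiserver process is up")
				return
			}
		}
	}
}
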
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
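
	The retry.go lines above poll the DHCP leases of the libvirt network for the new domain's IP, waiting a growing, slightly jittered interval between attempts (276ms, 360ms, 348ms, ... up to a few seconds). A small sketch of that style of retry loop follows; lookupIP is a placeholder for the lease lookup, and only the backoff pattern mirrors the log.

// Illustrative retry loop with a growing, jittered delay, in the spirit of
// the "will retry after ..." lines above. lookupIP is a stand-in placeholder.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// lookupIP stands in for "read the DHCP lease for this MAC address".
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	base := 250 * time.Millisecond

	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay with the attempt number and add jitter, roughly
		// matching the intervals printed in the log.
		wait := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
		log.Printf("will retry after %v: waiting for machine to come up", wait)
		time.Sleep(wait)
	}
	log.Fatal("gave up waiting for machine to come up")
}
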
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
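
	The "openssl x509 -checkend 86400" calls above confirm that each control-plane certificate remains valid for at least another day before the existing configuration is reused. The equivalent check written in Go with crypto/x509 looks roughly like the sketch below; the certificate path is one of the ones from the log, the rest is illustrative.

// Illustrative equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // from the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM data found in ", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	if time.Until(cert.NotAfter) < 24*time.Hour {
		log.Fatalf("%s expires too soon (NotAfter=%s)", path, cert.NotAfter)
	}
	fmt.Printf("%s is valid until %s\n", path, cert.NotAfter)
}
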
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
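
	The sequence above is the usual startup progression of /healthz on a restarted control plane: first 403 (most likely because the RBAC bootstrap roles that permit anonymous access to /healthz are not in place yet), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal sketch of polling that endpoint until it returns 200 is below; the URL comes from the log, while the skipped TLS verification and the interval are illustrative.

// Illustrative healthz poll: retry until the apiserver returns HTTP 200.
// TLS verification is skipped here only because this is a throwaway test VM.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.105:8443/healthz" // from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // typically just "ok"
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz request failed: %v, retrying", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
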
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
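The pod_ready.go lines above wait for each system-critical pod to report Ready, skipping pods whose node is itself not "Ready". A rough equivalent using client-go is sketched below as an illustration only; the kubeconfig path, polling interval, and the single hard-coded pod name are assumptions and this is not minikube's internal helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod has the Ready condition set to True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative kubeconfig path; the test run uses its own per-profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-880490", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
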
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
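In the addon-enable step above, the manifests are scp'd into /etc/kubernetes/addons on the VM and applied with the bundled kubectl over SSH. A condensed sketch of running one such remote command with golang.org/x/crypto/ssh follows; the address, username, key path, and command string are copied from the log for illustration, the error handling is simplified, and this is not the ssh_runner implementation itself.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes a single command on the VM over SSH and returns its combined output.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.50.105:22", "docker",
			"/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml")
		fmt.Println(out, err)
	}
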
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
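The configureAuth step above generates a server certificate with SANs [127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube] and copies it to /etc/docker on the VM. A small sketch for inspecting the SANs of such a PEM file with crypto/x509 is shown below; the file path is the host-side path from the log and the program only prints what the certificate contains.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the log above; adjust to wherever the server cert lives.
		data, err := os.ReadFile("/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The DNS and IP SANs printed here should match the san=[...] list in the provision log.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("NotAfter:", cert.NotAfter)
	}
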
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
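	The fix.go lines above sample the guest clock over SSH with `date +%s.%N` and accept the drift when it falls inside a tolerance (68.167171ms here). A minimal Go sketch of that comparison, reusing the sampled value from the log and assuming an illustrative 2-second tolerance (the real threshold is minikube's and is not shown in this log), is:

// clockdelta.go - a sketch, not minikube's implementation: parse the guest's
// `date +%s.%N` output and compare it with the local clock against a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1726275810.776460586" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad the fractional part to nanosecond precision before parsing
		frac := parts[1] + strings.Repeat("0", 9)
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726275810.776460586") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}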
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
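	The bash one-liner just above updates /etc/hosts idempotently: any line already ending in a tab plus `host.minikube.internal` is filtered out, a fresh `192.168.72.1<TAB>host.minikube.internal` entry is appended, and the result is copied back over /etc/hosts via a temp file. A rough Go equivalent of the same idea (paths and permissions are illustrative, and it renames the temp file rather than using `sudo cp`):

// hostsentry.go - sketch of the idempotent hosts-file update shown in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// mirror `grep -v $'\t<name>$'`: drop lines that already map this name
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}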
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
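	The preload sequence above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, extracts it into /var with lz4 while preserving security xattrs, and then deletes it. A sketch that shells out to the same tar invocation the log records (it assumes sudo, tar and lz4 are available on the target host):

// preload.go - sketch of the preload-extraction step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	// the runner removes the tarball once it has been extracted
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}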
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
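	The series of `openssl x509 -noout -in <cert> -checkend 86400` runs above asks whether each control-plane certificate expires within the next 24 hours. The same check done directly in Go, with an example path taken from the log, might look like the following sketch:

// certcheck.go - sketch of an `openssl x509 -checkend 86400` equivalent in Go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}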
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
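	The pod_ready lines interleaved through this log poll each system-critical pod until its Ready condition is True or a 6-minute budget runs out. A sketch of that kind of wait using client-go (this is not minikube's actual helper; the kubeconfig path is a placeholder and the pod name is just an example from the log):

// podready.go - sketch of polling a pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(client, "kube-system", "etcd-embed-certs-880490", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}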
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
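	From here the runner repeatedly probes https://192.168.72.203:8444/healthz until the apiserver answers ok; the 403 and 500 responses further down are interim states that the log shows clearing as the post-start bootstrap hooks finish. A minimal polling sketch of the same idea follows; it skips TLS verification and sends no client credentials, which is exactly why an anonymous probe like this can see the same "system:anonymous" 403 recorded below:

// healthz.go - sketch of polling the apiserver healthz endpoint until it is ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ok after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.203:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}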
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
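The 500 responses above are kube-apiserver's aggregate /healthz output while post-start hooks (rbac/bootstrap-roles, bootstrap-controller, apiservice registration/discovery) are still completing after the restart; once every hook reports ok the endpoint returns 200 and minikube stops polling. The Go sketch below shows that polling pattern in minimal form. It is not minikube's api_server.go code: the 500 ms interval, the 4-minute timeout, and the skipped TLS verification are illustrative assumptions; only the URL is taken from the log.

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200,
// mirroring the retry loop logged above. Illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster uses a self-signed CA; verification is skipped
		// here purely to keep the example short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// A 500 here usually means post-start hooks are still running.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.203:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}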
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
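The bridge CNI step copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist over SSH. The file's contents are not shown in the log, so the snippet below is a hypothetical minimal bridge conflist of roughly that shape, written by a small Go program; the JSON body and the 10.244.0.0/16 subnet are assumptions, and only the destination path comes from the log.

// writeConflist sketches writing a minimal bridge CNI config like the one
// minikube copies above. Illustrative only; the JSON and subnet are assumed.
package main

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Destination path taken from the log line above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}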
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
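Each pod_ready.go wait above is skipped (not failed) while the hosting node still reports Ready:False; the per-pod decision itself comes down to reading the pod's Ready condition. The sketch below expresses that check with the core/v1 types. It is a hedged illustration, not minikube's pod_ready.go implementation.

// isPodReady mirrors the intent of the pod_ready.go waits in the log above:
// a pod counts as "Ready" when its Ready condition has status True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod whose Ready condition is still False, as in the waits above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}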
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
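The addon phase above amounts to copying each manifest into /etc/kubernetes/addons/ on the guest and applying it with the pinned kubectl, then verifying the metrics-server addon. The sketch below reproduces just the apply call via os/exec. The kubeconfig, kubectl, and manifest paths are copied from the log; running the command locally rather than through minikube's ssh_runner is an assumption.

// applyAddons sketches the kubectl apply step logged above for the
// metrics-server manifests. Paths come from the log; everything else is
// illustrative.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("applied addon manifests:\n%s", out)
}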
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
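With no control-plane containers found, the fallback above gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output for diagnostics. The sketch below runs the same command set in a plain loop; the commands are lifted from the log, while executing them locally instead of over minikube's ssh_runner is an assumption.

// gatherLogs sketches the diagnostic collection seen above when no
// kube-apiserver container exists. Commands mirror the log; local execution
// is illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", name, out)
		if err != nil {
			fmt.Printf("(%s failed: %v)\n", name, err)
		}
	}
}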
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
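	Interleaved with the log gathering above, three other test processes (PIDs 74318, 73629 and 73455, apparently separate profiles started in parallel by the suite) are polling their metrics-server pods, which stay "Ready":"False" for this whole window. A hedged one-liner for checking the same condition directly with kubectl (the context name is a placeholder; the pod name is taken from the log above):
	
		kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-4v8px \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready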
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
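	After the kubeadm reset, minikube checks each kubeconfig under /etc/kubernetes and removes any file that does not reference the expected control-plane endpoint; here all four files are already gone, so every grep exits with status 2 and the rm -f calls are effectively no-ops. A condensed sketch of the same check (the individual grep/rm commands appear verbatim in the lines above):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"    # drop any stale kubeconfig missing the endpoint
		done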
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
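	With the stale configuration cleared, kubeadm init re-bootstraps the control plane: it reuses the existing certificates, writes fresh kubeconfigs and static pod manifests, and then waits up to 4m0s for the kubelet to bring the static pods up. A small, hedged sketch of checks one could run on the node afterwards to confirm whether the apiserver actually came back (these commands are not part of the minikube log above):

		sudo crictl ps --name kube-apiserver        # a running apiserver container should now be listed
		curl -sk https://localhost:8443/healthz     # any HTTP response (ideally "ok") once the apiserver is listening again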
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
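	The 4m0s WaitExtra timeout just above (metrics-server-6867b74b74-644mh never reporting Ready) is the point where minikube treats the control-plane restart for this profile as failed and falls back to a full kubeadm reset; the other profiles in this run are still polling the same condition for their own metrics-server pods. A hedged sketch of how one could inspect the stuck pod from the host with standard kubectl against the affected profile (these commands are not part of the log):

		kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
		kubectl -n kube-system describe pod -l k8s-app=metrics-server     # events usually show why readiness never passes
		kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50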
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
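	[editor note] The two lines above record minikube writing the bridge CNI config (/etc/cni/net.d/1-k8s.conflist) right after kubeadm init succeeded. A minimal sketch for checking this step by hand, assuming the profile name no-preload-057857 taken from the surrounding log and a working `minikube ssh`:
	  # Inspect the bridge CNI config minikube wrote on the node (profile name assumed from this log).
	  minikube -p no-preload-057857 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	  # Confirm CRI-O is running containers and pods receive addresses from the bridge network.
	  minikube -p no-preload-057857 ssh -- sudo crictl ps
	  kubectl --context no-preload-057857 get pods -A -o wide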
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
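	[editor note] The kubelet-check failure above (process 74039) means the kubelet on that node never answered its local healthz endpoint within the initial 40s. A minimal sketch for debugging this by hand; the profile for process 74039 is not shown in these lines, so <profile> is a placeholder:
	  # Re-run the same healthz probe kubeadm uses, then look at the kubelet unit and its recent logs.
	  minikube -p <profile> ssh -- curl -sS http://localhost:10248/healthz
	  minikube -p <profile> ssh -- sudo systemctl status kubelet --no-pager
	  minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 100 --no-pager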
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
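	A minimal sketch of acting on the suggestion printed above, assuming the failing profile name is substituted for <profile> (placeholder, not taken from this log) and the same KVM/CRI-O environment this report runs under:

		# retry the start with the kubelet cgroup driver pinned to systemd, as the suggestion proposes
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
		# then inspect why the kubelet still fails to become healthy on port 10248
		sudo journalctl -xeu kubelet | tail -n 100

	This mirrors the 'journalctl -xeu kubelet' and --extra-config hints in the output above; it is a sketch of the suggested retry, not a command recorded in this run.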
	
	
	==> CRI-O <==
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.593985668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277024593961383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eac35a4-4a64-4c45-9e65-1e4d946ab3d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.594586461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f107566-ef6c-4379-8f40-6262d55c8478 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.594669314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f107566-ef6c-4379-8f40-6262d55c8478 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.594884760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f107566-ef6c-4379-8f40-6262d55c8478 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.632236906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1a04931-415d-45c7-b73c-b78960d278a0 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.632316705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1a04931-415d-45c7-b73c-b78960d278a0 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.633253534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ac887ff-561a-4dfb-8a71-dd695adee2dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.633705944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277024633683083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ac887ff-561a-4dfb-8a71-dd695adee2dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.634150800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dc551c3-21e9-4bfb-8963-30fff311ddaa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.634217498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dc551c3-21e9-4bfb-8963-30fff311ddaa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.634447282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dc551c3-21e9-4bfb-8963-30fff311ddaa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.667830548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=570a3904-82a1-4508-b016-d6875ce985b6 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.667916839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=570a3904-82a1-4508-b016-d6875ce985b6 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.669001047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d62ca11-7a1b-4880-94c9-b51c91562ef3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.669452363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277024669376286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d62ca11-7a1b-4880-94c9-b51c91562ef3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.670007263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e82df5d9-eb42-4455-a220-0bf696e2bd9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.670070624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e82df5d9-eb42-4455-a220-0bf696e2bd9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.670274305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e82df5d9-eb42-4455-a220-0bf696e2bd9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.702142895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2f77641-f53f-4720-afe9-29ee01ffff33 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.702229212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2f77641-f53f-4720-afe9-29ee01ffff33 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.703917326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca6f5d0a-d946-4938-aeae-dfaf16e7de5c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.704333453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277024704312006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca6f5d0a-d946-4938-aeae-dfaf16e7de5c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.704845709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3315c82b-861b-4e4a-9576-8ad4b4e64eb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.704911936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3315c82b-861b-4e4a-9576-8ad4b4e64eb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:23:44 default-k8s-diff-port-754332 crio[712]: time="2024-09-14 01:23:44.705109843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726275854128142519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d223f20a7b200ee6e9564f7ec93e80e4984effa54f90a279e6f65fa53448cbb1,PodSandboxId:dbae3367a6ac4a86dc1d82be70eddfca5827de8df04be06409a97d1ddab0a0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726275833962801419,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f69c0db5-0c45-4cca-97bd-61c6f289bc84,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84,PodSandboxId:49efd604c9284d1b6679997d62b5de73781e9450e914aeec1da56041f0e879bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726275830962524962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lgsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f49f1-166e-49bf-9309-f74e9f0cf99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1,PodSandboxId:bc1ec75378bb33467402be3f5d0c339547917f4706cbdf6e3f6bc523ce2e1086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726275823305214337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9qhk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b57a730-4
1c0-448b-b566-16581db6996c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c,PodSandboxId:367349f1256497b569a8f9f724327e7b9b91ae3d90c18f1ae046a67866d844fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726275823273273844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e85d21-ed6c-4c14-9528
-6f9986aa1d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a,PodSandboxId:d48bd95d30495c77856becd8a7088f6e0ac953927f5c30973892986774f3ad1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726275819595185241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0acbfb1f9d859b754197603968e7a42,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06,PodSandboxId:cf79173828d05fdf50eb84f40b0f9b6b8dc5398a5aab8b7185f3caef5e83c0d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726275819593253538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db69b408191c16e1b8b7c9859b
eb150f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295,PodSandboxId:13002c9acc747670cc87eaf55f6da027c5a5454242c9d517d9dd0fa53d25c19b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726275819573881898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfcd404c5e3b04d88ec1538e5b30
b2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d,PodSandboxId:7b7534afde50091e5cd8a0318615dd784de9c9793d347db20cc129d55a39ef4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726275819579877518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-754332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6a5cd09666351bf93f50cb0cce65
0e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3315c82b-861b-4e4a-9576-8ad4b4e64eb3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd70c0b225453       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   367349f125649       storage-provisioner
	d223f20a7b200       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   dbae3367a6ac4       busybox
	eed5d3016c514       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   49efd604c9284       coredns-7c65d6cfc9-5lgsh
	a208a2f3609d0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      20 minutes ago      Running             kube-proxy                1                   bc1ec75378bb3       kube-proxy-f9qhk
	6342974eea142       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   367349f125649       storage-provisioner
	6234a7bcd6d95       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   d48bd95d30495       etcd-default-k8s-diff-port-754332
	b88f0f70ed0bd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      20 minutes ago      Running             kube-controller-manager   1                   cf79173828d05       kube-controller-manager-default-k8s-diff-port-754332
	e409487833e23       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      20 minutes ago      Running             kube-scheduler            1                   7b7534afde500       kube-scheduler-default-k8s-diff-port-754332
	38c2a1c006d77       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      20 minutes ago      Running             kube-apiserver            1                   13002c9acc747       kube-apiserver-default-k8s-diff-port-754332
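	A hedged sketch for re-collecting the container status table above (assuming the profile name default-k8s-diff-port-754332 and that crictl is available inside the minikube VM, as on standard minikube ISOs; not part of the captured logs):
	
	    minikube -p default-k8s-diff-port-754332 ssh -- sudo crictl ps -a   # list all CRI-O containers, including exited ones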
	
	
	==> coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53310 - 37392 "HINFO IN 1613091291824127344.2356255575009687738. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013987659s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-754332
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-754332
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=default-k8s-diff-port-754332
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_54_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:54:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-754332
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:23:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:19:32 +0000   Sat, 14 Sep 2024 00:54:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:19:32 +0000   Sat, 14 Sep 2024 00:54:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:19:32 +0000   Sat, 14 Sep 2024 00:54:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:19:32 +0000   Sat, 14 Sep 2024 01:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.203
	  Hostname:    default-k8s-diff-port-754332
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8e2a2e1f8984b5881f3db0787376198
	  System UUID:                b8e2a2e1-f898-4b58-81f3-db0787376198
	  Boot ID:                    ad514a84-2928-48e5-84c0-914dfa6e7281
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-5lgsh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-754332                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-754332             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-754332    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-f9qhk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-754332             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-lxzvw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-754332 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-754332 event: Registered Node default-k8s-diff-port-754332 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-754332 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-754332 event: Registered Node default-k8s-diff-port-754332 in Controller
	
	
	==> dmesg <==
	[Sep14 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056929] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039235] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.910810] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.970690] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571009] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.257465] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.063447] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.166845] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.137655] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.314761] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.049921] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +1.674591] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +0.065160] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.502941] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.437536] systemd-fstab-generator[1544]: Ignoring "noauto" option for root device
	[  +3.271496] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.076296] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] <==
	{"level":"info","ts":"2024-09-14T01:03:41.073521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:41.073548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgPreVoteResp from fd1c782511c6d1a at term 2"}
	{"level":"info","ts":"2024-09-14T01:03:41.073560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.073565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgVoteResp from fd1c782511c6d1a at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.073574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became leader at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.073581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fd1c782511c6d1a elected leader fd1c782511c6d1a at term 3"}
	{"level":"info","ts":"2024-09-14T01:03:41.075959Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fd1c782511c6d1a","local-member-attributes":"{Name:default-k8s-diff-port-754332 ClientURLs:[https://192.168.72.203:2379]}","request-path":"/0/members/fd1c782511c6d1a/attributes","cluster-id":"e420fb3f9edbaec1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T01:03:41.076122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:41.076503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T01:03:41.077253Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:41.078236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.203:2379"}
	{"level":"info","ts":"2024-09-14T01:03:41.078324Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:41.078365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T01:03:41.079039Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T01:03:41.081193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T01:03:56.622062Z","caller":"traceutil/trace.go:171","msg":"trace[1274899830] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"111.416158ms","start":"2024-09-14T01:03:56.510623Z","end":"2024-09-14T01:03:56.622039Z","steps":["trace[1274899830] 'process raft request'  (duration: 111.313363ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T01:13:41.129959Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2024-09-14T01:13:41.139880Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":865,"took":"9.203053ms","hash":698923900,"current-db-size-bytes":2842624,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2842624,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-14T01:13:41.139957Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":698923900,"revision":865,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T01:18:41.139650Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1107}
	{"level":"info","ts":"2024-09-14T01:18:41.143509Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1107,"took":"3.412374ms","hash":981372193,"current-db-size-bytes":2842624,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1687552,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-14T01:18:41.143567Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":981372193,"revision":1107,"compact-revision":865}
	{"level":"info","ts":"2024-09-14T01:23:41.151234Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1350}
	{"level":"info","ts":"2024-09-14T01:23:41.154258Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1350,"took":"2.69836ms","hash":1116791406,"current-db-size-bytes":2842624,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1667072,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-14T01:23:41.154310Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1116791406,"revision":1350,"compact-revision":1107}
	
	
	==> kernel <==
	 01:23:45 up 20 min,  0 users,  load average: 0.10, 0.10, 0.09
	Linux default-k8s-diff-port-754332 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] <==
	I0914 01:19:43.458440       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:19:43.459503       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:21:43.459200       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:21:43.459582       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 01:21:43.459692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:21:43.459767       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:21:43.460777       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:21:43.460875       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 01:23:42.458327       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:23:42.458482       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 01:23:43.460758       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:23:43.460808       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 01:23:43.460865       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 01:23:43.460927       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 01:23:43.462060       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:23:43.462105       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
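	The repeated 503s and "failed to download v1beta1.metrics.k8s.io" messages above indicate the aggregated metrics-server APIService stayed unreachable for this whole window, consistent with tests that wait on the metrics-server addon timing out. A hedged sketch of how one might confirm this from the same kubectl context (the deployment name metrics-server and the label k8s-app=metrics-server are assumptions based on the standard metrics-server manifests, not taken from these logs):
	
	    kubectl --context default-k8s-diff-port-754332 get apiservice v1beta1.metrics.k8s.io              # expect Available=False while the 503s persist
	    kubectl --context default-k8s-diff-port-754332 -n kube-system get pods -l k8s-app=metrics-server  # is the backing pod Running and Ready?
	    kubectl --context default-k8s-diff-port-754332 -n kube-system logs deploy/metrics-server          # backend-side errors, if the deployment exists under this name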
	
	
	==> kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] <==
	E0914 01:18:16.210750       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:18:16.709656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:18:46.217002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:18:46.717007       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:19:16.223559       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:19:16.724341       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:19:32.252544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-754332"
	E0914 01:19:46.230526       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:19:46.731629       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 01:19:52.918032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="240.532µs"
	I0914 01:20:03.931364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="120.556µs"
	E0914 01:20:16.236270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:20:16.738632       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:20:46.243713       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:20:46.747657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:21:16.250267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:21:16.756155       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:21:46.256877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:21:46.766332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:22:16.263027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:22:16.774521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:22:46.270078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:22:46.782752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 01:23:16.275707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 01:23:16.790279       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 01:03:43.597133       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 01:03:43.611220       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.203"]
	E0914 01:03:43.613473       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:03:43.668084       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 01:03:43.668134       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 01:03:43.668165       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:03:43.670736       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:03:43.671039       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:03:43.671053       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:43.673291       1 config.go:199] "Starting service config controller"
	I0914 01:03:43.673377       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:03:43.674166       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:03:43.679876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:03:43.679954       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 01:03:43.676536       1 config.go:328] "Starting node config controller"
	I0914 01:03:43.680006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:03:43.780260       1 shared_informer.go:320] Caches are synced for node config
	I0914 01:03:43.780292       1 shared_informer.go:320] Caches are synced for service config
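
kube-proxy failed to clean up nftables rules ("Operation not supported") and then selected the iptables proxier; this looks like the guest kernel simply lacking nftables and IPv6 NAT support rather than anything related to the test failure. Which backend is actually programming rules can be checked from the host (a sketch, assuming the profile's VM is reachable via minikube ssh and the nft binary is present in the guest):

	minikube -p default-k8s-diff-port-754332 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head
	minikube -p default-k8s-diff-port-754332 ssh -- sudo nft list tables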
	
	
	==> kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] <==
	I0914 01:03:40.535118       1 serving.go:386] Generated self-signed cert in-memory
	W0914 01:03:42.375967       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 01:03:42.376486       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 01:03:42.376546       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 01:03:42.376570       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 01:03:42.471380       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 01:03:42.471657       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:03:42.484140       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 01:03:42.484294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 01:03:42.484328       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:03:42.484346       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 01:03:42.584788       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
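
The extension-apiserver-authentication warnings are startup noise: the scheduler tried to read the configmap before its RBAC and informer caches were ready, and the final "Caches are synced for client-ca..." line shows the client-CA data was picked up shortly afterwards. If the warning persisted, the configmap itself could be inspected (a sketch, same kubectl context assumption as above):

	kubectl --context default-k8s-diff-port-754332 -n kube-system get configmap extension-apiserver-authentication -o yaml | head -n 20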
	
	
	==> kubelet <==
	Sep 14 01:22:37 default-k8s-diff-port-754332 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:22:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:38.224356     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276958224029424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:38.224441     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276958224029424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:45 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:45.906587     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:22:48 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:48.226649     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276968226192285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:48 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:48.226677     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276968226192285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:58 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:58.229226     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276978228686080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:22:58 default-k8s-diff-port-754332 kubelet[921]: E0914 01:22:58.229273     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276978228686080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:00 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:00.903723     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:23:08 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:08.230905     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276988230619465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:08 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:08.230941     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276988230619465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:12 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:12.903875     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:23:18 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:18.232852     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276998232331589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:18 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:18.233174     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276998232331589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:26 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:26.903864     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
	Sep 14 01:23:28 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:28.235728     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277008235333592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:28 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:28.236058     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277008235333592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:37 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:37.937098     921 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 01:23:37 default-k8s-diff-port-754332 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 01:23:37 default-k8s-diff-port-754332 kubelet[921]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 01:23:37 default-k8s-diff-port-754332 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 01:23:37 default-k8s-diff-port-754332 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 01:23:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:38.238242     921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277018237849432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:38.238283     921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726277018237849432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:23:38 default-k8s-diff-port-754332 kubelet[921]: E0914 01:23:38.903077     921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lxzvw" podUID="cc0df995-8084-4f3e-92b2-0268d571ed1c"
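
The kubelet log shows the actual reason the pod never runs: metrics-server is stuck in ImagePullBackOff because its image points at the unreachable registry fake.domain/registry.k8s.io/echoserver:1.4; the eviction-manager "missing image stats" errors appear to be a separate CRI-O stats issue and are not what keeps the pod down. The pull failures can be read directly from events (a sketch; the pod name is taken from the log above and may already have been replaced by the time this is run):

	kubectl --context default-k8s-diff-port-754332 -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-lxzvw --sort-by=.lastTimestamp | tail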
	
	
	==> storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] <==
	I0914 01:03:43.429975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 01:04:13.446182       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] <==
	I0914 01:04:14.225116       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:04:14.236318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:04:14.236459       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:04:31.637423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:04:31.637745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754332_88cfcfd5-a7c3-4411-8741-4588497658bd!
	I0914 01:04:31.638016       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96e3f175-2c30-4e03-b51a-193762063bcd", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-754332_88cfcfd5-a7c3-4411-8741-4588497658bd became leader
	I0914 01:04:31.738117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-754332_88cfcfd5-a7c3-4411-8741-4588497658bd!
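
The first storage-provisioner instance exited because the in-cluster apiserver VIP (10.96.0.1:443) was not reachable yet; the restarted instance initialized and acquired the k8s.io-minikube-hostpath leader-election lease, so dynamic provisioning recovered. The lease is the Endpoints object named in the event above and can be inspected with (a sketch, same context assumption):

	kubectl --context default-k8s-diff-port-754332 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml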
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lxzvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 describe pod metrics-server-6867b74b74-lxzvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-754332 describe pod metrics-server-6867b74b74-lxzvw: exit status 1 (62.467195ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lxzvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-754332 describe pod metrics-server-6867b74b74-lxzvw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (391.56s)
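
The post-mortem itself hit a small race: the non-running pod list at helpers_test.go:272 still named metrics-server-6867b74b74-lxzvw, but the describe a moment later returned NotFound, consistent with the pod having been removed between the two calls. A label-based lookup avoids pinning a specific pod name (a sketch; k8s-app=metrics-server is the label the addon normally applies):

	kubectl --context default-k8s-diff-port-754332 -n kube-system describe pods -l k8s-app=metrics-server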

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (153.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
	[previous warning repeated 5 more times]
E0914 01:20:11.672287   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
	[previous warning repeated 11 more times]
E0914 01:20:23.696705   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
	[previous warning repeated 61 more times]
E0914 01:21:25.781886   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
	[previous warning repeated 31 more times]
E0914 01:21:58.155920   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:22:06.865538   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
E0914 01:22:20.623734   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.116:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.116:8443: connect: connection refused
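The warnings above come from repeatedly listing pods in the kubernetes-dashboard namespace by label while the apiserver at 192.168.61.116:8443 refuses connections. As a rough illustration only (this is not the actual helpers_test.go implementation), a client-go poll of that shape could look like the sketch below; the kubeconfig path and the polling interval are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the CI run uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll for up to 9 minutes, matching the 9m0s deadline reported below.
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// While the apiserver is down the list call fails; log a warning and retry.
				fmt.Println("WARNING: pod list returned:", err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		fmt.Println("pod did not appear before the deadline:", err)
	}
}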
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (234.150002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-431084" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-431084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-431084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.13µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-431084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
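For reference, the final assertion (start_stop_delete_test.go:291-297) describes the dashboard-metrics-scraper deployment and expects it to reference the custom image registry.k8s.io/echoserver:1.4 that was injected via --images=MetricsScraper=... when the addon was enabled. A minimal stand-alone sketch of that check, shelling out to kubectl the same way the harness does (not the test's actual code), might be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Describe the scraper deployment in the dashboard namespace for the profile under test.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-431084",
		"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
	if err != nil {
		// With the apiserver stopped this fails, just as it did in the run above.
		fmt.Println("describe failed:", err, string(out))
		return
	}
	if strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
		fmt.Println("deployment references the expected custom image")
	} else {
		fmt.Println("deployment does not reference the expected image")
	}
}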
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (224.550101ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-431084 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-431084 logs -n 25: (1.626892821s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-617306             | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-057857             | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617306                  | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617306 --memory=2200 --alsologtostderr   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-617306 image list                           | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:55 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p newest-cni-617306                                   | newest-cni-617306            | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-817727 | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-817727                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:56 UTC | 14 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-880490            | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-431084        | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-754332       | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-754332 | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-754332                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-057857                  | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-057857                                   | no-preload-057857            | jenkins | v1.34.0 | 14 Sep 24 00:57 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-431084             | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC | 14 Sep 24 00:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-431084                              | old-k8s-version-431084       | jenkins | v1.34.0 | 14 Sep 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-880490                 | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-880490                                  | embed-certs-880490           | jenkins | v1.34.0 | 14 Sep 24 00:59 UTC | 14 Sep 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:59:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:59:36.918305   74318 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:59:36.918417   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918427   74318 out.go:358] Setting ErrFile to fd 2...
	I0914 00:59:36.918432   74318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:59:36.918626   74318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:59:36.919177   74318 out.go:352] Setting JSON to false
	I0914 00:59:36.920153   74318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6123,"bootTime":1726269454,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:59:36.920246   74318 start.go:139] virtualization: kvm guest
	I0914 00:59:36.922025   74318 out.go:177] * [embed-certs-880490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:59:36.922988   74318 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:59:36.923003   74318 notify.go:220] Checking for updates...
	I0914 00:59:36.924913   74318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:59:36.926042   74318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:59:36.927032   74318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:59:36.927933   74318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:59:36.928988   74318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:59:36.930868   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:59:36.931416   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.931473   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.946847   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 00:59:36.947321   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.947874   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.947897   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.948255   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.948441   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.948663   74318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:59:36.948956   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:59:36.948986   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:59:36.964009   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0914 00:59:36.964498   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:59:36.965024   74318 main.go:141] libmachine: Using API Version  1
	I0914 00:59:36.965048   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:59:36.965323   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:59:36.965548   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 00:59:36.998409   74318 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 00:59:36.999410   74318 start.go:297] selected driver: kvm2
	I0914 00:59:36.999423   74318 start.go:901] validating driver "kvm2" against &{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:36.999574   74318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:59:37.000299   74318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.000384   74318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 00:59:37.015477   74318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 00:59:37.015911   74318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:59:37.015947   74318 cni.go:84] Creating CNI manager for ""
	I0914 00:59:37.015990   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 00:59:37.016029   74318 start.go:340] cluster config:
	{Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:59:37.016128   74318 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:59:37.017763   74318 out.go:177] * Starting "embed-certs-880490" primary control-plane node in "embed-certs-880490" cluster
	I0914 00:59:33.260045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:36.332121   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:37.018867   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:59:37.018907   74318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 00:59:37.018916   74318 cache.go:56] Caching tarball of preloaded images
	I0914 00:59:37.018976   74318 preload.go:172] Found /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 00:59:37.018985   74318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:59:37.019065   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 00:59:37.019237   74318 start.go:360] acquireMachinesLock for embed-certs-880490: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 00:59:42.412042   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:45.484069   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:51.564101   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 00:59:54.636072   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:00.716106   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:03.788119   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:09.868039   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:12.940048   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:19.020003   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:22.092096   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:28.172094   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:31.244056   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:37.324085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:40.396043   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:46.476156   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:49.548045   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:55.628080   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:00:58.700085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:04.780062   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:07.852116   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:13.932064   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:17.004075   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:23.084093   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:26.156049   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:32.236079   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:35.308053   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:41.388104   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:44.460159   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:50.540085   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:53.612136   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:01:59.692071   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:02.764074   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:08.844082   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:11.916076   73455 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.203:22: connect: no route to host
	I0914 01:02:14.920471   73629 start.go:364] duration metric: took 4m22.596655718s to acquireMachinesLock for "no-preload-057857"
	I0914 01:02:14.920531   73629 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:14.920537   73629 fix.go:54] fixHost starting: 
	I0914 01:02:14.920891   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:14.920928   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:14.936191   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0914 01:02:14.936679   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:14.937164   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:02:14.937189   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:14.937716   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:14.937927   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:14.938130   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:02:14.939952   73629 fix.go:112] recreateIfNeeded on no-preload-057857: state=Stopped err=<nil>
	I0914 01:02:14.939975   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	W0914 01:02:14.940165   73629 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:14.942024   73629 out.go:177] * Restarting existing kvm2 VM for "no-preload-057857" ...
	I0914 01:02:14.943293   73629 main.go:141] libmachine: (no-preload-057857) Calling .Start
	I0914 01:02:14.943530   73629 main.go:141] libmachine: (no-preload-057857) Ensuring networks are active...
	I0914 01:02:14.944446   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network default is active
	I0914 01:02:14.944881   73629 main.go:141] libmachine: (no-preload-057857) Ensuring network mk-no-preload-057857 is active
	I0914 01:02:14.945275   73629 main.go:141] libmachine: (no-preload-057857) Getting domain xml...
	I0914 01:02:14.946063   73629 main.go:141] libmachine: (no-preload-057857) Creating domain...
	I0914 01:02:16.157615   73629 main.go:141] libmachine: (no-preload-057857) Waiting to get IP...
	I0914 01:02:16.158490   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.158879   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.158926   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.158851   74875 retry.go:31] will retry after 239.512255ms: waiting for machine to come up
	I0914 01:02:16.400454   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.400893   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.400925   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.400843   74875 retry.go:31] will retry after 256.530108ms: waiting for machine to come up
	I0914 01:02:16.659402   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:16.659884   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:16.659916   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:16.659840   74875 retry.go:31] will retry after 385.450667ms: waiting for machine to come up
	I0914 01:02:17.046366   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.046804   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.046828   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.046750   74875 retry.go:31] will retry after 598.323687ms: waiting for machine to come up
	I0914 01:02:14.917753   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:14.917808   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918137   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:02:14.918164   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:02:14.918414   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:02:14.920353   73455 machine.go:96] duration metric: took 4m37.418744273s to provisionDockerMachine
	I0914 01:02:14.920394   73455 fix.go:56] duration metric: took 4m37.442195157s for fixHost
	I0914 01:02:14.920401   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 4m37.442214s
	W0914 01:02:14.920423   73455 start.go:714] error starting host: provision: host is not running
	W0914 01:02:14.920510   73455 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 01:02:14.920518   73455 start.go:729] Will try again in 5 seconds ...
	I0914 01:02:17.646553   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:17.647012   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:17.647041   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:17.646960   74875 retry.go:31] will retry after 568.605601ms: waiting for machine to come up
	I0914 01:02:18.216828   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:18.217240   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:18.217257   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:18.217192   74875 retry.go:31] will retry after 825.650352ms: waiting for machine to come up
	I0914 01:02:19.044211   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.044531   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.044557   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.044498   74875 retry.go:31] will retry after 911.49902ms: waiting for machine to come up
	I0914 01:02:19.958142   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:19.958688   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:19.958718   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:19.958644   74875 retry.go:31] will retry after 1.139820217s: waiting for machine to come up
	I0914 01:02:21.100178   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:21.100750   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:21.100786   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:21.100670   74875 retry.go:31] will retry after 1.475229553s: waiting for machine to come up
	I0914 01:02:19.922076   73455 start.go:360] acquireMachinesLock for default-k8s-diff-port-754332: {Name:mk217f2566ba17084c34ea86e690e58b0176e948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 01:02:22.578205   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:22.578684   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:22.578706   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:22.578651   74875 retry.go:31] will retry after 1.77205437s: waiting for machine to come up
	I0914 01:02:24.353719   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:24.354208   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:24.354239   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:24.354151   74875 retry.go:31] will retry after 2.901022207s: waiting for machine to come up
	I0914 01:02:31.848563   74039 start.go:364] duration metric: took 3m32.830181981s to acquireMachinesLock for "old-k8s-version-431084"
	I0914 01:02:31.848615   74039 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:31.848621   74039 fix.go:54] fixHost starting: 
	I0914 01:02:31.849013   74039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:31.849065   74039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:31.866058   74039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0914 01:02:31.866554   74039 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:31.867084   74039 main.go:141] libmachine: Using API Version  1
	I0914 01:02:31.867112   74039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:31.867448   74039 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:31.867632   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:31.867760   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetState
	I0914 01:02:31.869372   74039 fix.go:112] recreateIfNeeded on old-k8s-version-431084: state=Stopped err=<nil>
	I0914 01:02:31.869413   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	W0914 01:02:31.869595   74039 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:31.871949   74039 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-431084" ...
	I0914 01:02:27.257732   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:27.258110   73629 main.go:141] libmachine: (no-preload-057857) DBG | unable to find current IP address of domain no-preload-057857 in network mk-no-preload-057857
	I0914 01:02:27.258139   73629 main.go:141] libmachine: (no-preload-057857) DBG | I0914 01:02:27.258042   74875 retry.go:31] will retry after 3.491816385s: waiting for machine to come up
	I0914 01:02:30.751096   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751599   73629 main.go:141] libmachine: (no-preload-057857) Found IP for machine: 192.168.39.129
	I0914 01:02:30.751625   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has current primary IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.751633   73629 main.go:141] libmachine: (no-preload-057857) Reserving static IP address...
	I0914 01:02:30.752145   73629 main.go:141] libmachine: (no-preload-057857) Reserved static IP address: 192.168.39.129
	I0914 01:02:30.752183   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.752208   73629 main.go:141] libmachine: (no-preload-057857) Waiting for SSH to be available...
	I0914 01:02:30.752238   73629 main.go:141] libmachine: (no-preload-057857) DBG | skip adding static IP to network mk-no-preload-057857 - found existing host DHCP lease matching {name: "no-preload-057857", mac: "52:54:00:12:57:32", ip: "192.168.39.129"}
	I0914 01:02:30.752250   73629 main.go:141] libmachine: (no-preload-057857) DBG | Getting to WaitForSSH function...
	I0914 01:02:30.754820   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755117   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.755145   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.755298   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH client type: external
	I0914 01:02:30.755319   73629 main.go:141] libmachine: (no-preload-057857) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa (-rw-------)
	I0914 01:02:30.755355   73629 main.go:141] libmachine: (no-preload-057857) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:30.755370   73629 main.go:141] libmachine: (no-preload-057857) DBG | About to run SSH command:
	I0914 01:02:30.755381   73629 main.go:141] libmachine: (no-preload-057857) DBG | exit 0
	I0914 01:02:30.875899   73629 main.go:141] libmachine: (no-preload-057857) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:30.876236   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetConfigRaw
	I0914 01:02:30.876974   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:30.879748   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880094   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.880133   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.880401   73629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/config.json ...
	I0914 01:02:30.880640   73629 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:30.880660   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:30.880860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.882993   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883327   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.883342   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.883527   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.883705   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.883855   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.884001   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.884170   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.884360   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.884370   73629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:30.983766   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:30.983823   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984082   73629 buildroot.go:166] provisioning hostname "no-preload-057857"
	I0914 01:02:30.984113   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:30.984294   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:30.987039   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987415   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:30.987437   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:30.987596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:30.987752   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.987983   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:30.988235   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:30.988436   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:30.988643   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:30.988657   73629 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-057857 && echo "no-preload-057857" | sudo tee /etc/hostname
	I0914 01:02:31.101235   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-057857
	
	I0914 01:02:31.101266   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.103950   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104162   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.104195   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.104393   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.104564   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104731   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.104860   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.104981   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.105158   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.105175   73629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-057857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-057857/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-057857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:31.212371   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:31.212401   73629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:31.212424   73629 buildroot.go:174] setting up certificates
	I0914 01:02:31.212434   73629 provision.go:84] configureAuth start
	I0914 01:02:31.212446   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetMachineName
	I0914 01:02:31.212707   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.215685   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216208   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.216246   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.216459   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.218430   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218779   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.218804   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.218922   73629 provision.go:143] copyHostCerts
	I0914 01:02:31.218984   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:31.218996   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:31.219081   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:31.219190   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:31.219201   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:31.219238   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:31.219330   73629 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:31.219339   73629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:31.219378   73629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:31.219457   73629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.no-preload-057857 san=[127.0.0.1 192.168.39.129 localhost minikube no-preload-057857]
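	(For reference, a minimal way to confirm the SANs baked into the regenerated server certificate on the guest; this check is illustrative only and is not part of the test run. It assumes the cert ends up at the remote path /etc/docker/server.pem recorded in the auth options above.)

	    # Hypothetical spot-check of the provisioned server cert's SANs.
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | \
	      grep -A1 'Subject Alternative Name'
	    # Expected to list: 127.0.0.1, 192.168.39.129, localhost, minikube, no-preload-057857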
	I0914 01:02:31.268985   73629 provision.go:177] copyRemoteCerts
	I0914 01:02:31.269068   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:31.269117   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.272403   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.272798   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.272833   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.273000   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.273195   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.273345   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.273517   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.353369   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:31.376097   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:02:31.397772   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:31.419002   73629 provision.go:87] duration metric: took 206.553772ms to configureAuth
	I0914 01:02:31.419032   73629 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:31.419191   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:31.419253   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.421810   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422110   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.422139   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.422252   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.422440   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422596   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.422724   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.422850   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.423004   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.423019   73629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:31.627692   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:31.627718   73629 machine.go:96] duration metric: took 747.063693ms to provisionDockerMachine
	I0914 01:02:31.627729   73629 start.go:293] postStartSetup for "no-preload-057857" (driver="kvm2")
	I0914 01:02:31.627738   73629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:31.627753   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.628053   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:31.628086   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.630619   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.630970   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.630996   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.631160   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.631334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.631510   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.631642   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.710107   73629 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:31.713975   73629 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:31.713998   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:31.714063   73629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:31.714135   73629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:31.714223   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:31.723175   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:31.746092   73629 start.go:296] duration metric: took 118.348122ms for postStartSetup
	I0914 01:02:31.746144   73629 fix.go:56] duration metric: took 16.825605717s for fixHost
	I0914 01:02:31.746169   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.748729   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749113   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.749144   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.749334   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.749570   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749712   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.749831   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.750046   73629 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:31.750241   73629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0914 01:02:31.750254   73629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:31.848382   73629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275751.820875726
	
	I0914 01:02:31.848406   73629 fix.go:216] guest clock: 1726275751.820875726
	I0914 01:02:31.848415   73629 fix.go:229] Guest: 2024-09-14 01:02:31.820875726 +0000 UTC Remote: 2024-09-14 01:02:31.746149785 +0000 UTC m=+279.567339149 (delta=74.725941ms)
	I0914 01:02:31.848438   73629 fix.go:200] guest clock delta is within tolerance: 74.725941ms
	I0914 01:02:31.848445   73629 start.go:83] releasing machines lock for "no-preload-057857", held for 16.92792955s
	I0914 01:02:31.848474   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.848755   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:31.851390   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.851842   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.851863   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.852204   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852727   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852881   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:02:31.852964   73629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:31.853017   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.853114   73629 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:31.853141   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:02:31.855696   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.855951   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856088   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856117   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856300   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856396   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:31.856432   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:31.856465   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856589   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:02:31.856636   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.856726   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:02:31.856793   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.856858   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:02:31.857012   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:02:31.958907   73629 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:31.966266   73629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:32.116145   73629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:32.121827   73629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:32.121901   73629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:32.137095   73629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:32.137122   73629 start.go:495] detecting cgroup driver to use...
	I0914 01:02:32.137194   73629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:32.152390   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:32.165725   73629 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:32.165784   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:32.180278   73629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:32.193859   73629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:32.309678   73629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:32.472823   73629 docker.go:233] disabling docker service ...
	I0914 01:02:32.472887   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:32.487243   73629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:32.503326   73629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:32.624869   73629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:32.754452   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:32.777844   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:32.796457   73629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:02:32.796540   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.807665   73629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:32.807728   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.818473   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.828673   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.838738   73629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:32.849119   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.859708   73629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:32.876628   73629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
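	(The sed edits above rewrite the CRI-O drop-in in place; a minimal sketch of how the touched keys in /etc/crio/crio.conf.d/02-crio.conf could be spot-checked afterwards. The grep invocation and the expected lines are illustrative, assuming the edits applied cleanly; the rest of the file is not shown in this log.)

	    # Hypothetical verification of the CRI-O drop-in after the edits above.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Expected (approximate) matches:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",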
	I0914 01:02:32.888173   73629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:32.898936   73629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:32.898985   73629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:32.912867   73629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:32.922744   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:33.053822   73629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:33.146703   73629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:33.146766   73629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:33.152620   73629 start.go:563] Will wait 60s for crictl version
	I0914 01:02:33.152685   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.156297   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:33.191420   73629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:33.191514   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.220038   73629 ssh_runner.go:195] Run: crio --version
	I0914 01:02:33.247895   73629 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:02:31.873250   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .Start
	I0914 01:02:31.873462   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring networks are active...
	I0914 01:02:31.874359   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network default is active
	I0914 01:02:31.874749   74039 main.go:141] libmachine: (old-k8s-version-431084) Ensuring network mk-old-k8s-version-431084 is active
	I0914 01:02:31.875156   74039 main.go:141] libmachine: (old-k8s-version-431084) Getting domain xml...
	I0914 01:02:31.875828   74039 main.go:141] libmachine: (old-k8s-version-431084) Creating domain...
	I0914 01:02:33.207745   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting to get IP...
	I0914 01:02:33.208576   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.208959   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.209029   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.208936   75017 retry.go:31] will retry after 307.534052ms: waiting for machine to come up
	I0914 01:02:33.518255   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.518710   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.518734   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.518668   75017 retry.go:31] will retry after 378.523689ms: waiting for machine to come up
	I0914 01:02:33.899367   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:33.899835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:33.899861   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:33.899808   75017 retry.go:31] will retry after 327.128981ms: waiting for machine to come up
	I0914 01:02:33.249199   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetIP
	I0914 01:02:33.252353   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252709   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:02:33.252731   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:02:33.252923   73629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:33.256797   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:33.269489   73629 kubeadm.go:883] updating cluster {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:33.269597   73629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:02:33.269647   73629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:33.303845   73629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:02:33.303868   73629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:33.303929   73629 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.303951   73629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.303964   73629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.303985   73629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.304045   73629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.304096   73629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 01:02:33.304165   73629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.304043   73629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.305647   73629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.305648   73629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.305707   73629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.305751   73629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.305667   73629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 01:02:33.305703   73629 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:33.305866   73629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.305906   73629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.496595   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.532415   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 01:02:33.535715   73629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 01:02:33.535754   73629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.535801   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.538969   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.542030   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.543541   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.547976   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.590405   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707405   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.707474   73629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 01:02:33.707527   73629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.707547   73629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 01:02:33.707607   73629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.707634   73629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 01:02:33.707664   73629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.707692   73629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 01:02:33.707702   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707719   73629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.707644   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707579   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707567   73629 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 01:02:33.707811   73629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.707837   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.707744   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:33.749514   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.749567   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.749526   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.749651   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.749653   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:33.749617   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.876817   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:33.876959   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:33.877007   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 01:02:33.877062   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:33.881993   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:33.882119   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.014317   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 01:02:34.014413   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 01:02:34.014425   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.014481   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 01:02:34.014556   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 01:02:34.027700   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 01:02:34.100252   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 01:02:34.100326   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 01:02:34.100379   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:34.100439   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:34.130937   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 01:02:34.130960   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.130981   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 01:02:34.131011   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 01:02:34.131048   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 01:02:34.131079   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:34.131135   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:34.142699   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 01:02:34.142753   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 01:02:34.142813   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 01:02:34.142843   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:34.735158   73629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231330   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.100295108s)
	I0914 01:02:36.231367   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 01:02:36.231387   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.100235159s)
	I0914 01:02:36.231390   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231405   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 01:02:36.231441   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 01:02:36.231459   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.100351573s)
	I0914 01:02:36.231493   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 01:02:36.231500   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.088634897s)
	I0914 01:02:36.231521   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 01:02:36.231559   73629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.496369503s)
	I0914 01:02:36.231595   73629 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 01:02:36.231625   73629 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:36.231664   73629 ssh_runner.go:195] Run: which crictl
	I0914 01:02:34.228189   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.228631   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.228660   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.228585   75017 retry.go:31] will retry after 519.90738ms: waiting for machine to come up
	I0914 01:02:34.750224   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:34.750700   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:34.750730   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:34.750649   75017 retry.go:31] will retry after 724.174833ms: waiting for machine to come up
	I0914 01:02:35.476426   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:35.477009   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:35.477037   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:35.476949   75017 retry.go:31] will retry after 757.259366ms: waiting for machine to come up
	I0914 01:02:36.235973   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:36.236553   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:36.236586   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:36.236499   75017 retry.go:31] will retry after 967.854956ms: waiting for machine to come up
	I0914 01:02:37.206285   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:37.206818   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:37.206842   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:37.206766   75017 retry.go:31] will retry after 1.476721336s: waiting for machine to come up
	I0914 01:02:38.685010   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:38.685423   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:38.685452   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:38.685374   75017 retry.go:31] will retry after 1.193706152s: waiting for machine to come up
	I0914 01:02:38.184158   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.952689716s)
	I0914 01:02:38.184200   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 01:02:38.184233   73629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184305   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 01:02:38.184233   73629 ssh_runner.go:235] Completed: which crictl: (1.952548907s)
	I0914 01:02:38.184388   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:38.237461   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:41.474383   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.29005343s)
	I0914 01:02:41.474423   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 01:02:41.474461   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474543   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 01:02:41.474461   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.236964583s)
	I0914 01:02:41.474622   73629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:39.880518   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:39.880991   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:39.881023   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:39.880939   75017 retry.go:31] will retry after 1.629348889s: waiting for machine to come up
	I0914 01:02:41.511974   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:41.512493   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:41.512531   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:41.512434   75017 retry.go:31] will retry after 2.550783353s: waiting for machine to come up
	I0914 01:02:43.355564   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880999227s)
	I0914 01:02:43.355604   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 01:02:43.355626   73629 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355660   73629 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881013827s)
	I0914 01:02:43.355677   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 01:02:43.355702   73629 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 01:02:43.355842   73629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:45.319422   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.96371492s)
	I0914 01:02:45.319474   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 01:02:45.319481   73629 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963607489s)
	I0914 01:02:45.319505   73629 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:45.319509   73629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 01:02:45.319564   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 01:02:44.065833   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:44.066273   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:44.066305   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:44.066249   75017 retry.go:31] will retry after 3.446023159s: waiting for machine to come up
	I0914 01:02:47.514152   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:47.514640   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | unable to find current IP address of domain old-k8s-version-431084 in network mk-old-k8s-version-431084
	I0914 01:02:47.514662   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | I0914 01:02:47.514597   75017 retry.go:31] will retry after 3.153049876s: waiting for machine to come up
	I0914 01:02:47.294812   73629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.975219517s)
	I0914 01:02:47.294852   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 01:02:47.294877   73629 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:47.294926   73629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 01:02:48.241618   73629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 01:02:48.241662   73629 cache_images.go:123] Successfully loaded all cached images
	I0914 01:02:48.241667   73629 cache_images.go:92] duration metric: took 14.937786321s to LoadCachedImages
	I0914 01:02:48.241679   73629 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.1 crio true true} ...
	I0914 01:02:48.241779   73629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-057857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:02:48.241839   73629 ssh_runner.go:195] Run: crio config
	I0914 01:02:48.299424   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:48.299449   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:48.299460   73629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:02:48.299478   73629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-057857 NodeName:no-preload-057857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:02:48.299629   73629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-057857"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:02:48.299720   73629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:02:48.310310   73629 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:02:48.310392   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:02:48.320029   73629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 01:02:48.336827   73629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:02:48.353176   73629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0914 01:02:48.369892   73629 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0914 01:02:48.374036   73629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:48.386045   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:48.498562   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:02:48.515123   73629 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857 for IP: 192.168.39.129
	I0914 01:02:48.515148   73629 certs.go:194] generating shared ca certs ...
	I0914 01:02:48.515161   73629 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:02:48.515351   73629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:02:48.515407   73629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:02:48.515416   73629 certs.go:256] generating profile certs ...
	I0914 01:02:48.515519   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/client.key
	I0914 01:02:48.515597   73629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key.4fda827d
	I0914 01:02:48.515651   73629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key
	I0914 01:02:48.515781   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:02:48.515842   73629 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:02:48.515855   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:02:48.515880   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:02:48.515903   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:02:48.515936   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:02:48.515990   73629 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:48.516660   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:02:48.547402   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:02:48.575316   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:02:48.616430   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:02:48.650609   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:02:48.675077   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:02:48.702119   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:02:48.725322   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/no-preload-057857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:02:48.747897   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:02:48.770849   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:02:48.793817   73629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:02:48.817950   73629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:02:48.834050   73629 ssh_runner.go:195] Run: openssl version
	I0914 01:02:48.839564   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:02:48.850022   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854382   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.854469   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:02:48.860039   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:02:48.870374   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:02:48.880753   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.884933   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.885005   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:02:48.890405   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:02:48.900587   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:02:48.910979   73629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915167   73629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.915229   73629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:02:48.920599   73629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:02:48.930918   73629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:02:48.935391   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:02:48.941138   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:02:48.946868   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:02:48.952815   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:02:48.958496   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:02:48.964307   73629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:02:48.970059   73629 kubeadm.go:392] StartCluster: {Name:no-preload-057857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-057857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:02:48.970160   73629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:02:48.970222   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.008841   73629 cri.go:89] found id: ""
	I0914 01:02:49.008923   73629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:02:49.018636   73629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:02:49.018654   73629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:02:49.018703   73629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:02:49.027998   73629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:02:49.028913   73629 kubeconfig.go:125] found "no-preload-057857" server: "https://192.168.39.129:8443"
	I0914 01:02:49.030931   73629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:02:49.040012   73629 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0914 01:02:49.040050   73629 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:02:49.040061   73629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:02:49.040115   73629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:02:49.074921   73629 cri.go:89] found id: ""
	I0914 01:02:49.074986   73629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:02:49.093601   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:02:49.104570   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:02:49.104610   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:02:49.104655   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:02:49.114807   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:02:49.114862   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:02:49.124394   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:02:49.133068   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:02:49.133133   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:02:49.142418   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.151523   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:02:49.151592   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:02:49.161020   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:02:49.170076   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:02:49.170147   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:02:49.179975   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:02:49.189079   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:49.301467   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.330274   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028758641s)
	I0914 01:02:50.330313   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.537276   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.602665   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:50.686243   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:02:50.686349   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.186449   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.686520   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:51.709099   73629 api_server.go:72] duration metric: took 1.022841344s to wait for apiserver process to appear ...
	I0914 01:02:51.709131   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:02:51.709161   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:52.316392   74318 start.go:364] duration metric: took 3m15.297121866s to acquireMachinesLock for "embed-certs-880490"
	I0914 01:02:52.316477   74318 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:02:52.316486   74318 fix.go:54] fixHost starting: 
	I0914 01:02:52.316891   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:02:52.316951   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:02:52.334940   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0914 01:02:52.335383   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:02:52.335930   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:02:52.335959   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:02:52.336326   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:02:52.336535   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:02:52.336679   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:02:52.338453   74318 fix.go:112] recreateIfNeeded on embed-certs-880490: state=Stopped err=<nil>
	I0914 01:02:52.338479   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	W0914 01:02:52.338630   74318 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:02:52.340868   74318 out.go:177] * Restarting existing kvm2 VM for "embed-certs-880490" ...
	I0914 01:02:50.668838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669412   74039 main.go:141] libmachine: (old-k8s-version-431084) Found IP for machine: 192.168.61.116
	I0914 01:02:50.669443   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has current primary IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.669452   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserving static IP address...
	I0914 01:02:50.669902   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.669934   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | skip adding static IP to network mk-old-k8s-version-431084 - found existing host DHCP lease matching {name: "old-k8s-version-431084", mac: "52:54:00:d9:88:87", ip: "192.168.61.116"}
	I0914 01:02:50.669948   74039 main.go:141] libmachine: (old-k8s-version-431084) Reserved static IP address: 192.168.61.116
	I0914 01:02:50.669963   74039 main.go:141] libmachine: (old-k8s-version-431084) Waiting for SSH to be available...
	I0914 01:02:50.670001   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Getting to WaitForSSH function...
	I0914 01:02:50.672774   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673288   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.673316   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.673525   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH client type: external
	I0914 01:02:50.673555   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa (-rw-------)
	I0914 01:02:50.673579   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:02:50.673590   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | About to run SSH command:
	I0914 01:02:50.673608   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | exit 0
	I0914 01:02:50.804056   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | SSH cmd err, output: <nil>: 
	I0914 01:02:50.804451   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetConfigRaw
	I0914 01:02:50.805102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:50.807835   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808260   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.808292   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.808602   74039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/config.json ...
	I0914 01:02:50.808835   74039 machine.go:93] provisionDockerMachine start ...
	I0914 01:02:50.808870   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:50.809148   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.811522   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.811943   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.811999   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.812096   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.812286   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812446   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.812594   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.812809   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.813063   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.813088   74039 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:02:50.923990   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:02:50.924026   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924261   74039 buildroot.go:166] provisioning hostname "old-k8s-version-431084"
	I0914 01:02:50.924292   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:50.924488   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:50.926872   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927229   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:50.927259   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:50.927384   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:50.927552   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927700   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:50.927820   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:50.927952   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:50.928127   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:50.928138   74039 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-431084 && echo "old-k8s-version-431084" | sudo tee /etc/hostname
	I0914 01:02:51.051577   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-431084
	
	I0914 01:02:51.051605   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.054387   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054784   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.054825   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.054993   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.055209   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055376   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.055535   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.055729   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.055963   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.055983   74039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-431084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-431084/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-431084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:02:51.172741   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:02:51.172774   74039 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:02:51.172797   74039 buildroot.go:174] setting up certificates
	I0914 01:02:51.172806   74039 provision.go:84] configureAuth start
	I0914 01:02:51.172815   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetMachineName
	I0914 01:02:51.173102   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:51.176408   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.176830   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.176870   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.177039   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.179595   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.179923   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.179956   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.180146   74039 provision.go:143] copyHostCerts
	I0914 01:02:51.180204   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:02:51.180213   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:02:51.180269   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:02:51.180371   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:02:51.180379   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:02:51.180399   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:02:51.180453   74039 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:02:51.180459   74039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:02:51.180489   74039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:02:51.180537   74039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-431084 san=[127.0.0.1 192.168.61.116 localhost minikube old-k8s-version-431084]
	I0914 01:02:51.673643   74039 provision.go:177] copyRemoteCerts
	I0914 01:02:51.673699   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:02:51.673724   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.676739   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677100   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.677136   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.677273   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.677470   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.677596   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.677726   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:51.761918   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:02:51.784811   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:02:51.807650   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:02:51.833214   74039 provision.go:87] duration metric: took 660.387757ms to configureAuth
	I0914 01:02:51.833254   74039 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:02:51.833502   74039 config.go:182] Loaded profile config "old-k8s-version-431084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 01:02:51.833595   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:51.836386   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.836838   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:51.836897   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:51.837042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:51.837245   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837399   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:51.837530   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:51.837702   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:51.837936   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:51.837966   74039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:02:52.068754   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:02:52.068790   74039 machine.go:96] duration metric: took 1.259920192s to provisionDockerMachine
	I0914 01:02:52.068803   74039 start.go:293] postStartSetup for "old-k8s-version-431084" (driver="kvm2")
	I0914 01:02:52.068817   74039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:02:52.068851   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.069201   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:02:52.069236   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.072638   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073045   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.073073   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.073287   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.073506   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.073696   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.073884   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.161464   74039 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:02:52.166620   74039 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:02:52.166645   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:02:52.166724   74039 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:02:52.166825   74039 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:02:52.166939   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:02:52.177000   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:02:52.199705   74039 start.go:296] duration metric: took 130.88607ms for postStartSetup
	I0914 01:02:52.199747   74039 fix.go:56] duration metric: took 20.351125848s for fixHost
	I0914 01:02:52.199772   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.202484   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.202834   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.202877   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.203042   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.203220   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203358   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.203716   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.203928   74039 main.go:141] libmachine: Using SSH client type: native
	I0914 01:02:52.204096   74039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0914 01:02:52.204106   74039 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:02:52.316235   74039 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275772.293932338
	
	I0914 01:02:52.316261   74039 fix.go:216] guest clock: 1726275772.293932338
	I0914 01:02:52.316272   74039 fix.go:229] Guest: 2024-09-14 01:02:52.293932338 +0000 UTC Remote: 2024-09-14 01:02:52.199751432 +0000 UTC m=+233.328100415 (delta=94.180906ms)
	I0914 01:02:52.316310   74039 fix.go:200] guest clock delta is within tolerance: 94.180906ms
	I0914 01:02:52.316317   74039 start.go:83] releasing machines lock for "old-k8s-version-431084", held for 20.467723923s
	I0914 01:02:52.316351   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.316618   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:52.319514   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.319946   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.319972   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.320140   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320719   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.320986   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .DriverName
	I0914 01:02:52.321108   74039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:02:52.321158   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.321201   74039 ssh_runner.go:195] Run: cat /version.json
	I0914 01:02:52.321230   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHHostname
	I0914 01:02:52.324098   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324350   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324602   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324671   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324684   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.324773   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:52.324821   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:52.324857   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.324921   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHPort
	I0914 01:02:52.325009   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325188   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.325204   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHKeyPath
	I0914 01:02:52.325356   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetSSHUsername
	I0914 01:02:52.325500   74039 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/old-k8s-version-431084/id_rsa Username:docker}
	I0914 01:02:52.435613   74039 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:52.442824   74039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:02:52.591490   74039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:02:52.598893   74039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:02:52.598993   74039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:02:52.614168   74039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:02:52.614195   74039 start.go:495] detecting cgroup driver to use...
	I0914 01:02:52.614259   74039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:02:52.632521   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:02:52.648069   74039 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:02:52.648135   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:02:52.662421   74039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:02:52.676903   74039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:02:52.812277   74039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:02:52.956945   74039 docker.go:233] disabling docker service ...
	I0914 01:02:52.957019   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:02:52.977766   74039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:02:52.993090   74039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:02:53.131546   74039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:02:53.269480   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:02:53.283971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:02:53.304720   74039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 01:02:53.304774   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.318959   74039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:02:53.319036   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.333889   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.346067   74039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:02:53.356806   74039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:02:53.367778   74039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:02:53.378068   74039 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:02:53.378133   74039 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:02:53.397150   74039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:02:53.407614   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:02:53.561214   74039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:02:53.661879   74039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:02:53.661957   74039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:02:53.668477   74039 start.go:563] Will wait 60s for crictl version
	I0914 01:02:53.668538   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:53.672447   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:02:53.712522   74039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:02:53.712654   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.749876   74039 ssh_runner.go:195] Run: crio --version
	I0914 01:02:53.790332   74039 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 01:02:53.791738   74039 main.go:141] libmachine: (old-k8s-version-431084) Calling .GetIP
	I0914 01:02:53.794683   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795205   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:88:87", ip: ""} in network mk-old-k8s-version-431084: {Iface:virbr2 ExpiryTime:2024-09-14 02:02:42 +0000 UTC Type:0 Mac:52:54:00:d9:88:87 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:old-k8s-version-431084 Clientid:01:52:54:00:d9:88:87}
	I0914 01:02:53.795244   74039 main.go:141] libmachine: (old-k8s-version-431084) DBG | domain old-k8s-version-431084 has defined IP address 192.168.61.116 and MAC address 52:54:00:d9:88:87 in network mk-old-k8s-version-431084
	I0914 01:02:53.795497   74039 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 01:02:53.800496   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:02:53.815087   74039 kubeadm.go:883] updating cluster {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:02:53.815209   74039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 01:02:53.815248   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:53.872463   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:53.872548   74039 ssh_runner.go:195] Run: which lz4
	I0914 01:02:53.877221   74039 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:02:53.882237   74039 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:02:53.882290   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 01:02:55.366458   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:02:55.366488   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:02:55.366503   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.509895   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.509940   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:55.710204   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:55.717844   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:55.717879   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.209555   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.215532   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:02:56.215564   73629 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:02:56.709213   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:02:56.715063   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:02:56.731997   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:02:56.732028   73629 api_server.go:131] duration metric: took 5.022889118s to wait for apiserver health ...
	I0914 01:02:56.732040   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:02:56.732049   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:02:56.733972   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:02:52.342255   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Start
	I0914 01:02:52.342431   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring networks are active...
	I0914 01:02:52.343477   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network default is active
	I0914 01:02:52.344017   74318 main.go:141] libmachine: (embed-certs-880490) Ensuring network mk-embed-certs-880490 is active
	I0914 01:02:52.344470   74318 main.go:141] libmachine: (embed-certs-880490) Getting domain xml...
	I0914 01:02:52.345354   74318 main.go:141] libmachine: (embed-certs-880490) Creating domain...
	I0914 01:02:53.615671   74318 main.go:141] libmachine: (embed-certs-880490) Waiting to get IP...
	I0914 01:02:53.616604   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.616981   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.617032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.616947   75183 retry.go:31] will retry after 228.049754ms: waiting for machine to come up
	I0914 01:02:53.846401   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:53.846842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:53.846897   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:53.846806   75183 retry.go:31] will retry after 278.911209ms: waiting for machine to come up
	I0914 01:02:54.127493   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.128123   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.128169   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.128059   75183 retry.go:31] will retry after 382.718021ms: waiting for machine to come up
	I0914 01:02:54.512384   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:54.512941   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:54.512970   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:54.512893   75183 retry.go:31] will retry after 500.959108ms: waiting for machine to come up
	I0914 01:02:55.015176   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.015721   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.015745   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.015642   75183 retry.go:31] will retry after 721.663757ms: waiting for machine to come up
	I0914 01:02:55.738556   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:55.739156   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:55.739199   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:55.739120   75183 retry.go:31] will retry after 939.149999ms: waiting for machine to come up
	I0914 01:02:56.679538   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:56.679957   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:56.679988   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:56.679885   75183 retry.go:31] will retry after 893.052555ms: waiting for machine to come up
	I0914 01:02:56.735649   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:02:56.748523   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:02:56.795945   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:02:56.818331   73629 system_pods.go:59] 8 kube-system pods found
	I0914 01:02:56.818376   73629 system_pods.go:61] "coredns-7c65d6cfc9-hvscv" [8ee8fbcf-d9c0-48f4-b825-17bc3710a24c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:02:56.818389   73629 system_pods.go:61] "etcd-no-preload-057857" [170f4860-5576-4578-9b44-60caf7977808] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:02:56.818401   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [e00feeb1-5f2f-4f0f-b027-1705447b8141] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:02:56.818412   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [e84ef357-e72f-4c69-ba0a-7f3df212ec6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:02:56.818422   73629 system_pods.go:61] "kube-proxy-hc6bw" [ee8b6e39-3c42-48ed-83e7-47887bab2458] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:02:56.818430   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [5fbc79f1-74d2-4d6b-b432-20836c14605d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:02:56.818439   73629 system_pods.go:61] "metrics-server-6867b74b74-644mh" [6a0e055a-6fe6-4817-a7ee-b9a80295f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:02:56.818447   73629 system_pods.go:61] "storage-provisioner" [7080e61f-8dce-4136-a0bb-aa9b61c7145b] Running
	I0914 01:02:56.818459   73629 system_pods.go:74] duration metric: took 22.489415ms to wait for pod list to return data ...
	I0914 01:02:56.818468   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:02:56.823922   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:02:56.823959   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:02:56.823974   73629 node_conditions.go:105] duration metric: took 5.497006ms to run NodePressure ...
	I0914 01:02:56.823996   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:02:57.135213   73629 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139724   73629 kubeadm.go:739] kubelet initialised
	I0914 01:02:57.139748   73629 kubeadm.go:740] duration metric: took 4.505865ms waiting for restarted kubelet to initialise ...
	I0914 01:02:57.139757   73629 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:02:57.146864   73629 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.156573   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156599   73629 pod_ready.go:82] duration metric: took 9.700759ms for pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.156609   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "coredns-7c65d6cfc9-hvscv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.156615   73629 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:57.164928   73629 pod_ready.go:98] node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164953   73629 pod_ready.go:82] duration metric: took 8.33025ms for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	E0914 01:02:57.164962   73629 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-057857" hosting pod "etcd-no-preload-057857" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-057857" has status "Ready":"False"
	I0914 01:02:57.164968   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:02:55.490643   74039 crio.go:462] duration metric: took 1.613461545s to copy over tarball
	I0914 01:02:55.490741   74039 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:02:58.694403   74039 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203629957s)
	I0914 01:02:58.694444   74039 crio.go:469] duration metric: took 3.203762168s to extract the tarball
	I0914 01:02:58.694454   74039 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:02:58.755119   74039 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:02:58.799456   74039 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 01:02:58.799488   74039 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 01:02:58.799565   74039 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:58.799581   74039 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.799635   74039 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.799640   74039 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.799668   74039 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 01:02:58.799804   74039 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.799904   74039 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.800067   74039 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801046   74039 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:58.801066   74039 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:58.801187   74039 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:58.801437   74039 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:58.801478   74039 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 01:02:58.801496   74039 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:58.801531   74039 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:58.802084   74039 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:02:57.575016   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:57.575551   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:57.575578   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:57.575510   75183 retry.go:31] will retry after 1.06824762s: waiting for machine to come up
	I0914 01:02:58.645284   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:02:58.645842   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:02:58.645873   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:02:58.645790   75183 retry.go:31] will retry after 1.660743923s: waiting for machine to come up
	I0914 01:03:00.308232   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:00.308783   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:00.308803   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:00.308742   75183 retry.go:31] will retry after 1.771456369s: waiting for machine to come up
	I0914 01:02:59.422152   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:01.674525   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:02:59.030467   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.040905   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 01:02:59.041637   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.049717   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.051863   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.053628   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.075458   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.122587   74039 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 01:02:59.122649   74039 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.122694   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192629   74039 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 01:02:59.192663   74039 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 01:02:59.192676   74039 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 01:02:59.192701   74039 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.192726   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.192752   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.218991   74039 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 01:02:59.219039   74039 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.218992   74039 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 01:02:59.219046   74039 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 01:02:59.219074   74039 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.219087   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219079   74039 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.219128   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.219156   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241387   74039 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 01:02:59.241431   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.241432   74039 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.241460   74039 ssh_runner.go:195] Run: which crictl
	I0914 01:02:59.241514   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.241543   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.241618   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.241662   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.241628   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.391098   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.391129   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.391149   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.391190   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.391211   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.391262   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.391282   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.543886   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 01:02:59.565053   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 01:02:59.565138   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.565086   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 01:02:59.565206   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 01:02:59.567509   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 01:02:59.744028   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 01:02:59.749030   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 01:02:59.749135   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 01:02:59.749068   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 01:02:59.749163   74039 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 01:02:59.749213   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 01:02:59.749301   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 01:02:59.783185   74039 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 01:03:00.074466   74039 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:00.217589   74039 cache_images.go:92] duration metric: took 1.418079742s to LoadCachedImages
	W0914 01:03:00.217695   74039 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19640-5422/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0914 01:03:00.217715   74039 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.20.0 crio true true} ...
	I0914 01:03:00.217852   74039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-431084 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:00.217951   74039 ssh_runner.go:195] Run: crio config
	I0914 01:03:00.267811   74039 cni.go:84] Creating CNI manager for ""
	I0914 01:03:00.267836   74039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:00.267846   74039 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:00.267863   74039 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-431084 NodeName:old-k8s-version-431084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:03:00.267973   74039 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-431084"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:00.268032   74039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:03:00.277395   74039 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:00.277470   74039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:00.286642   74039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 01:03:00.303641   74039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:00.323035   74039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 01:03:00.340013   74039 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:00.343553   74039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:00.355675   74039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:00.470260   74039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:00.486376   74039 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084 for IP: 192.168.61.116
	I0914 01:03:00.486403   74039 certs.go:194] generating shared ca certs ...
	I0914 01:03:00.486424   74039 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:00.486630   74039 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:00.486690   74039 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:00.486704   74039 certs.go:256] generating profile certs ...
	I0914 01:03:00.486825   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/client.key
	I0914 01:03:00.486914   74039 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key.58151014
	I0914 01:03:00.486966   74039 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key
	I0914 01:03:00.487121   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:00.487161   74039 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:00.487175   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:00.487225   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:00.487262   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:00.487295   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:00.487361   74039 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:00.488031   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:00.531038   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:00.566037   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:00.594526   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:00.619365   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:03:00.646111   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:00.674106   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:00.698652   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/old-k8s-version-431084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:03:00.738710   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:00.766042   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:00.799693   74039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:00.828073   74039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:00.850115   74039 ssh_runner.go:195] Run: openssl version
	I0914 01:03:00.857710   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:00.868792   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874809   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.874880   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:00.882396   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:00.897411   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:00.911908   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917507   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.917582   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:00.924892   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:00.937517   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:00.948659   74039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953773   74039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.953854   74039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:00.959585   74039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:00.971356   74039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:00.977289   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:00.983455   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:00.989387   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:00.995459   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:01.001308   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:01.007637   74039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:03:01.013553   74039 kubeadm.go:392] StartCluster: {Name:old-k8s-version-431084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:01.013632   74039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:01.013693   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.061972   74039 cri.go:89] found id: ""
	I0914 01:03:01.062049   74039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:01.072461   74039 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:01.072495   74039 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:01.072555   74039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:01.083129   74039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:01.084233   74039 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-431084" does not appear in /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:01.084925   74039 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-5422/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-431084" cluster setting kubeconfig missing "old-k8s-version-431084" context setting]
	I0914 01:03:01.085786   74039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:01.087599   74039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:01.100670   74039 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0914 01:03:01.100703   74039 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:01.100716   74039 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:01.100769   74039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:01.143832   74039 cri.go:89] found id: ""
	I0914 01:03:01.143899   74039 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:01.164518   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:01.177317   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:01.177342   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:01.177397   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:01.186457   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:01.186533   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:01.195675   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:01.204280   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:01.204348   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:01.213525   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.222534   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:01.222605   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:01.232541   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:01.241878   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:01.241951   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:01.251997   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:01.262176   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:01.388974   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.628238   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.239228787s)
	I0914 01:03:02.628264   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.860570   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:02.986963   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:03.078655   74039 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:03.078749   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:03.578872   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:02.081646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:02.082105   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:02.082136   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:02.082053   75183 retry.go:31] will retry after 2.272470261s: waiting for machine to come up
	I0914 01:03:04.356903   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:04.357553   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:04.357582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:04.357486   75183 retry.go:31] will retry after 3.016392455s: waiting for machine to come up
	I0914 01:03:03.676305   73629 pod_ready.go:103] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:05.172750   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:05.172781   73629 pod_ready.go:82] duration metric: took 8.007806258s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:05.172795   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:07.180531   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:04.079089   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:04.578934   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.079829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:05.579129   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.079218   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:06.579774   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.079736   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:07.579576   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.079139   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:08.579211   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.664652   73455 start.go:364] duration metric: took 51.742534192s to acquireMachinesLock for "default-k8s-diff-port-754332"
	I0914 01:03:11.664707   73455 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:03:11.664716   73455 fix.go:54] fixHost starting: 
	I0914 01:03:11.665112   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:11.665142   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:11.685523   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0914 01:03:11.686001   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:11.686634   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:11.686660   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:11.687059   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:11.687233   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:11.687384   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:11.689053   73455 fix.go:112] recreateIfNeeded on default-k8s-diff-port-754332: state=Stopped err=<nil>
	I0914 01:03:11.689081   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	W0914 01:03:11.689345   73455 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:03:11.692079   73455 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-754332" ...
	I0914 01:03:07.375314   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:07.375773   74318 main.go:141] libmachine: (embed-certs-880490) DBG | unable to find current IP address of domain embed-certs-880490 in network mk-embed-certs-880490
	I0914 01:03:07.375819   74318 main.go:141] libmachine: (embed-certs-880490) DBG | I0914 01:03:07.375738   75183 retry.go:31] will retry after 3.07124256s: waiting for machine to come up
	I0914 01:03:10.448964   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449474   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has current primary IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.449523   74318 main.go:141] libmachine: (embed-certs-880490) Found IP for machine: 192.168.50.105
	I0914 01:03:10.449567   74318 main.go:141] libmachine: (embed-certs-880490) Reserving static IP address...
	I0914 01:03:10.449956   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.449982   74318 main.go:141] libmachine: (embed-certs-880490) DBG | skip adding static IP to network mk-embed-certs-880490 - found existing host DHCP lease matching {name: "embed-certs-880490", mac: "52:54:00:2c:d0:a9", ip: "192.168.50.105"}
	I0914 01:03:10.449994   74318 main.go:141] libmachine: (embed-certs-880490) Reserved static IP address: 192.168.50.105
	I0914 01:03:10.450008   74318 main.go:141] libmachine: (embed-certs-880490) Waiting for SSH to be available...
	I0914 01:03:10.450019   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Getting to WaitForSSH function...
	I0914 01:03:10.452294   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452698   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.452727   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.452910   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH client type: external
	I0914 01:03:10.452971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa (-rw-------)
	I0914 01:03:10.453012   74318 main.go:141] libmachine: (embed-certs-880490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:10.453032   74318 main.go:141] libmachine: (embed-certs-880490) DBG | About to run SSH command:
	I0914 01:03:10.453050   74318 main.go:141] libmachine: (embed-certs-880490) DBG | exit 0
	I0914 01:03:10.579774   74318 main.go:141] libmachine: (embed-certs-880490) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:10.580176   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetConfigRaw
	I0914 01:03:10.580840   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.583268   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583601   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.583633   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.583869   74318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/config.json ...
	I0914 01:03:10.584086   74318 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:10.584107   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:10.584306   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.586393   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586678   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.586717   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.586850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.587042   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587190   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.587307   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.587514   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.587811   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.587828   74318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:10.696339   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:10.696369   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696634   74318 buildroot.go:166] provisioning hostname "embed-certs-880490"
	I0914 01:03:10.696660   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.696891   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.699583   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.699992   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.700023   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.700160   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.700335   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700492   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.700610   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.700750   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.700922   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.700934   74318 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-880490 && echo "embed-certs-880490" | sudo tee /etc/hostname
	I0914 01:03:10.822819   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-880490
	
	I0914 01:03:10.822850   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.825597   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.825859   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.825894   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.826020   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:10.826215   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826366   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:10.826494   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:10.826726   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:10.826906   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:10.826927   74318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-880490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-880490/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-880490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:10.943975   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:10.944013   74318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:10.944042   74318 buildroot.go:174] setting up certificates
	I0914 01:03:10.944055   74318 provision.go:84] configureAuth start
	I0914 01:03:10.944074   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetMachineName
	I0914 01:03:10.944351   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:10.946782   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947145   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.947173   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.947381   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:10.950231   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950628   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:10.950654   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:10.950903   74318 provision.go:143] copyHostCerts
	I0914 01:03:10.950978   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:10.950988   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:10.951042   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:10.951185   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:10.951195   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:10.951216   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:10.951304   74318 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:10.951311   74318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:10.951332   74318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:10.951385   74318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.embed-certs-880490 san=[127.0.0.1 192.168.50.105 embed-certs-880490 localhost minikube]
	I0914 01:03:11.029044   74318 provision.go:177] copyRemoteCerts
	I0914 01:03:11.029127   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:11.029151   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.031950   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032310   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.032349   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.032506   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.032667   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.032832   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.032961   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.118140   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:11.143626   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:03:11.167233   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:11.190945   74318 provision.go:87] duration metric: took 246.872976ms to configureAuth
	I0914 01:03:11.190975   74318 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:11.191307   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:11.191421   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.194582   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.194936   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.194963   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.195137   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.195309   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195480   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.195621   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.195801   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.195962   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.195978   74318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:11.420029   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:11.420069   74318 machine.go:96] duration metric: took 835.970014ms to provisionDockerMachine
	I0914 01:03:11.420084   74318 start.go:293] postStartSetup for "embed-certs-880490" (driver="kvm2")
	I0914 01:03:11.420095   74318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:11.420134   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.420432   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:11.420463   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.423591   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.423930   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.423960   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.424119   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.424326   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.424481   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.424618   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.510235   74318 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:11.514295   74318 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:11.514317   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:11.514388   74318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:11.514501   74318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:11.514619   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:11.523522   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:11.546302   74318 start.go:296] duration metric: took 126.204848ms for postStartSetup
	I0914 01:03:11.546343   74318 fix.go:56] duration metric: took 19.229858189s for fixHost
	I0914 01:03:11.546367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.549113   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549517   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.549540   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.549759   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.549967   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550113   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.550274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.550424   74318 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:11.550613   74318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0914 01:03:11.550625   74318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:11.664454   74318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275791.643970925
	
	I0914 01:03:11.664476   74318 fix.go:216] guest clock: 1726275791.643970925
	I0914 01:03:11.664485   74318 fix.go:229] Guest: 2024-09-14 01:03:11.643970925 +0000 UTC Remote: 2024-09-14 01:03:11.546348011 +0000 UTC m=+214.663331844 (delta=97.622914ms)
	I0914 01:03:11.664552   74318 fix.go:200] guest clock delta is within tolerance: 97.622914ms
	I0914 01:03:11.664565   74318 start.go:83] releasing machines lock for "embed-certs-880490", held for 19.348109343s
	I0914 01:03:11.664599   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.664894   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:11.667565   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.667947   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.667971   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.668125   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668629   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668804   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:11.668915   74318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:11.668959   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.669030   74318 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:11.669056   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:11.671646   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.671881   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672053   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672078   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672159   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:11.672178   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672184   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:11.672343   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672352   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:11.672472   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672530   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:11.672594   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.672635   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:11.672739   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:11.752473   74318 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:11.781816   74318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:09.187104   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:09.679205   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.679230   73629 pod_ready.go:82] duration metric: took 4.506427401s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.679242   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683761   73629 pod_ready.go:93] pod "kube-proxy-hc6bw" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.683797   73629 pod_ready.go:82] duration metric: took 4.532579ms for pod "kube-proxy-hc6bw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.683810   73629 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687725   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:09.687742   73629 pod_ready.go:82] duration metric: took 3.924978ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:09.687752   73629 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:11.694703   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:11.693430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Start
	I0914 01:03:11.693658   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring networks are active...
	I0914 01:03:11.694585   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network default is active
	I0914 01:03:11.695009   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Ensuring network mk-default-k8s-diff-port-754332 is active
	I0914 01:03:11.695379   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Getting domain xml...
	I0914 01:03:11.696097   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Creating domain...
	I0914 01:03:11.922203   74318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:11.929637   74318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:11.929704   74318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:11.945610   74318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:11.945640   74318 start.go:495] detecting cgroup driver to use...
	I0914 01:03:11.945716   74318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:11.966186   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:11.984256   74318 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:11.984327   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:12.001358   74318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:12.016513   74318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:12.145230   74318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:12.280392   74318 docker.go:233] disabling docker service ...
	I0914 01:03:12.280470   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:12.295187   74318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:12.307896   74318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:12.462859   74318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:12.601394   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:03:12.615458   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:12.633392   74318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:12.633446   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.643467   74318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:12.643550   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.653855   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.664536   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.676420   74318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:12.688452   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.699678   74318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.719681   74318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:12.729655   74318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:12.739444   74318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:12.739503   74318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:12.757569   74318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:12.772472   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:12.904292   74318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:03:12.996207   74318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:12.996275   74318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:13.001326   74318 start.go:563] Will wait 60s for crictl version
	I0914 01:03:13.001409   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:03:13.004949   74318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:13.042806   74318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:13.042869   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.069182   74318 ssh_runner.go:195] Run: crio --version
	I0914 01:03:13.102316   74318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:09.079607   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:09.579709   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.079890   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:10.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.079655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:11.579251   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.079647   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:12.578961   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.078789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.579228   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:13.103619   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetIP
	I0914 01:03:13.107163   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107579   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:13.107606   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:13.107856   74318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:13.111808   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:13.123612   74318 kubeadm.go:883] updating cluster {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:13.123741   74318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:13.123796   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:13.157827   74318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:13.157886   74318 ssh_runner.go:195] Run: which lz4
	I0914 01:03:13.161725   74318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:13.165793   74318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:13.165823   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:14.502844   74318 crio.go:462] duration metric: took 1.341144325s to copy over tarball
	I0914 01:03:14.502921   74318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:16.658839   74318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155876521s)
	I0914 01:03:16.658872   74318 crio.go:469] duration metric: took 2.155996077s to extract the tarball
	I0914 01:03:16.658881   74318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:16.695512   74318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:16.741226   74318 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:16.741248   74318 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:03:16.741256   74318 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.31.1 crio true true} ...
	I0914 01:03:16.741362   74318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-880490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:16.741422   74318 ssh_runner.go:195] Run: crio config
	I0914 01:03:16.795052   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:16.795080   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:16.795099   74318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:16.795122   74318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-880490 NodeName:embed-certs-880490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:16.795294   74318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-880490"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:16.795375   74318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:16.805437   74318 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:16.805510   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:16.815328   74318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 01:03:16.832869   74318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:16.850496   74318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 01:03:16.868678   74318 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:16.872574   74318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:16.885388   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:13.695618   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:16.194928   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:12.997809   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting to get IP...
	I0914 01:03:12.998678   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999184   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:12.999255   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:12.999166   75355 retry.go:31] will retry after 276.085713ms: waiting for machine to come up
	I0914 01:03:13.276698   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277068   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.277100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.277033   75355 retry.go:31] will retry after 359.658898ms: waiting for machine to come up
	I0914 01:03:13.638812   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639364   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.639400   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.639310   75355 retry.go:31] will retry after 347.842653ms: waiting for machine to come up
	I0914 01:03:13.989107   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:13.989743   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:13.989617   75355 retry.go:31] will retry after 544.954892ms: waiting for machine to come up
	I0914 01:03:14.536215   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536599   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:14.536746   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:14.536548   75355 retry.go:31] will retry after 540.74487ms: waiting for machine to come up
	I0914 01:03:15.079430   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079929   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.079962   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.079900   75355 retry.go:31] will retry after 624.051789ms: waiting for machine to come up
	I0914 01:03:15.705350   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705866   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:15.705895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:15.705822   75355 retry.go:31] will retry after 913.087412ms: waiting for machine to come up
	I0914 01:03:16.621100   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621588   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:16.621615   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:16.621550   75355 retry.go:31] will retry after 1.218937641s: waiting for machine to come up
	I0914 01:03:14.079547   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:14.579430   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.078855   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:15.579007   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.078944   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:16.578925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.079628   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.579755   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.079735   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:18.578894   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:17.008601   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:17.025633   74318 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490 for IP: 192.168.50.105
	I0914 01:03:17.025661   74318 certs.go:194] generating shared ca certs ...
	I0914 01:03:17.025682   74318 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:17.025870   74318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:17.025932   74318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:17.025944   74318 certs.go:256] generating profile certs ...
	I0914 01:03:17.026055   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/client.key
	I0914 01:03:17.026136   74318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key.1d9c4f03
	I0914 01:03:17.026194   74318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key
	I0914 01:03:17.026353   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:17.026399   74318 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:17.026412   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:17.026442   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:17.026479   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:17.026517   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:17.026585   74318 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:17.027302   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:17.073430   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:17.115694   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:17.156764   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:17.185498   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 01:03:17.210458   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:17.234354   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:17.257082   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/embed-certs-880490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:17.281002   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:17.305089   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:17.328639   74318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:17.353079   74318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:17.370952   74318 ssh_runner.go:195] Run: openssl version
	I0914 01:03:17.376686   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:17.387154   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391440   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.391506   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:17.397094   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:17.407795   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:17.418403   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422922   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.422991   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:17.428548   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:17.438881   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:17.452077   74318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456733   74318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.456808   74318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:17.463043   74318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:17.473820   74318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:17.478139   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:17.484568   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:17.490606   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:17.496644   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:17.502549   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:17.508738   74318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
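Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether a control-plane certificate expires within the next 24 hours. A stdlib Go sketch of the equivalent check; the certificate path in main is taken from the log, while the helper itself is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, i.e. what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}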
	I0914 01:03:17.514951   74318 kubeadm.go:392] StartCluster: {Name:embed-certs-880490 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-880490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:17.515059   74318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:17.515117   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.556586   74318 cri.go:89] found id: ""
	I0914 01:03:17.556655   74318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:17.566504   74318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:17.566526   74318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:17.566585   74318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:17.575767   74318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:17.576927   74318 kubeconfig.go:125] found "embed-certs-880490" server: "https://192.168.50.105:8443"
	I0914 01:03:17.579029   74318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:17.588701   74318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0914 01:03:17.588739   74318 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:17.588759   74318 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:17.588815   74318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:17.624971   74318 cri.go:89] found id: ""
	I0914 01:03:17.625042   74318 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:17.641321   74318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:17.650376   74318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:17.650406   74318 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:17.650452   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:03:17.658792   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:17.658857   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:17.667931   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:03:17.676298   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:17.676363   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:17.684693   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.694657   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:17.694723   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:17.703725   74318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:03:17.711916   74318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:17.711982   74318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:17.723930   74318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:17.735911   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:17.861518   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.208062   74318 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346499976s)
	I0914 01:03:19.208106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.411072   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.475449   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:19.573447   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:19.573550   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.074711   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.574387   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.073875   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.574581   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.590186   74318 api_server.go:72] duration metric: took 2.016718872s to wait for apiserver process to appear ...
	I0914 01:03:21.590215   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:21.590245   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:18.695203   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:20.696480   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:17.842334   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842919   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:17.842953   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:17.842854   75355 retry.go:31] will retry after 1.539721303s: waiting for machine to come up
	I0914 01:03:19.384714   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:19.385267   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:19.385171   75355 retry.go:31] will retry after 1.792148708s: waiting for machine to come up
	I0914 01:03:21.178528   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:21.179069   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:21.178982   75355 retry.go:31] will retry after 2.88848049s: waiting for machine to come up
	I0914 01:03:19.079873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:19.578818   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.079093   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:20.579730   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.079635   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:21.579697   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.079259   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:22.579061   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:23.578908   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.640462   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.640496   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:24.640519   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:24.709172   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:24.709203   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:25.090643   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.095569   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.095597   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:25.591312   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:25.602159   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:25.602197   74318 api_server.go:103] status: https://192.168.50.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:26.091344   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:03:26.097408   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:03:26.103781   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:26.103833   74318 api_server.go:131] duration metric: took 4.513611032s to wait for apiserver health ...
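The api_server.go lines above poll https://192.168.50.105:8443/healthz, treating the 403 and 500 responses as not-yet-healthy and the eventual 200 "ok" as ready. A minimal Go sketch of such a probe, assuming the apiserver's self-signed certificate is deliberately not verified; the endpoint, timeouts, and poll interval are illustrative:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// The test VM's apiserver serves a cert for its own CA, so this probe
	// skips verification, as a local healthz poller typically would.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	url := "https://192.168.50.105:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", string(body)) // "ok"
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz probe failed: %v, retrying", err)
		}
		select {
		case <-ctx.Done():
			log.Fatal("apiserver never became healthy")
		case <-time.After(500 * time.Millisecond):
		}
	}
}

As in the log, intermediate 403s (anonymous user) and 500s (post-start hooks still failing) simply mean "keep polling"; only a 200 ends the wait.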
	I0914 01:03:26.103841   74318 cni.go:84] Creating CNI manager for ""
	I0914 01:03:26.103848   74318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:26.105632   74318 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:26.106923   74318 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:26.149027   74318 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:03:26.169837   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:26.180447   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:26.180487   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:26.180495   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:26.180503   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:26.180508   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:26.180514   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:26.180519   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:26.180527   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:26.180531   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:26.180537   74318 system_pods.go:74] duration metric: took 10.676342ms to wait for pod list to return data ...
	I0914 01:03:26.180543   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:26.187023   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:26.187062   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:26.187078   74318 node_conditions.go:105] duration metric: took 6.529612ms to run NodePressure ...
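The node-capacity figures logged just above (ephemeral storage in Ki, CPU count) come from the node's status as exposed by the Kubernetes API. A minimal sketch of reading the same fields with client-go, assuming a reachable kubeconfig at the usual ~/.kube/config path (a hypothetical location, not taken from the log):

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Build a client from the local kubeconfig (path is an assumption for the sketch).
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a map of resource name -> quantity, e.g. "cpu" and "ephemeral-storage".
			cpu := n.Status.Capacity["cpu"]
			storage := n.Status.Capacity["ephemeral-storage"]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}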
	I0914 01:03:26.187099   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:26.507253   74318 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511395   74318 kubeadm.go:739] kubelet initialised
	I0914 01:03:26.511417   74318 kubeadm.go:740] duration metric: took 4.141857ms waiting for restarted kubelet to initialise ...
	I0914 01:03:26.511424   74318 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:26.517034   74318 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.522320   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522355   74318 pod_ready.go:82] duration metric: took 5.286339ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.522368   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.522378   74318 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.526903   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526924   74318 pod_ready.go:82] duration metric: took 4.536652ms for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.526932   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "etcd-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.526937   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.531025   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531048   74318 pod_ready.go:82] duration metric: took 4.104483ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.531057   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.531063   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:26.573867   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573904   74318 pod_ready.go:82] duration metric: took 42.83377ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.573918   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.573926   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:23.195282   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:25.694842   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:24.068632   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069026   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:24.069059   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:24.068958   75355 retry.go:31] will retry after 2.264547039s: waiting for machine to come up
	I0914 01:03:26.336477   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.336987   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | unable to find current IP address of domain default-k8s-diff-port-754332 in network mk-default-k8s-diff-port-754332
	I0914 01:03:26.337015   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | I0914 01:03:26.336927   75355 retry.go:31] will retry after 3.313315265s: waiting for machine to come up
	I0914 01:03:26.973594   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973624   74318 pod_ready.go:82] duration metric: took 399.687272ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:26.973637   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-proxy-566n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:26.973646   74318 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.376077   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376110   74318 pod_ready.go:82] duration metric: took 402.455131ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.376125   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.376144   74318 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:27.773855   74318 pod_ready.go:98] node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773882   74318 pod_ready.go:82] duration metric: took 397.725056ms for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:27.773893   74318 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-880490" hosting pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:27.773899   74318 pod_ready.go:39] duration metric: took 1.262466922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
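The "waiting for pod ... to be Ready" entries above poll each pod's conditions until the Ready condition reports True; here every wait is skipped early because the node itself is not yet Ready. A minimal sketch of that kind of poll with client-go (the helper name waitPodReady and the 2-second interval are assumptions for illustration, not minikube's code):

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat lookup errors as "not ready yet" and keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}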
	I0914 01:03:27.773915   74318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:27.785056   74318 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:27.785076   74318 kubeadm.go:597] duration metric: took 10.21854476s to restartPrimaryControlPlane
	I0914 01:03:27.785086   74318 kubeadm.go:394] duration metric: took 10.270142959s to StartCluster
	I0914 01:03:27.785101   74318 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.785186   74318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:27.787194   74318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:27.787494   74318 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:27.787603   74318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:27.787702   74318 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-880490"
	I0914 01:03:27.787722   74318 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-880490"
	W0914 01:03:27.787730   74318 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:27.787738   74318 config.go:182] Loaded profile config "embed-certs-880490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:27.787765   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.787778   74318 addons.go:69] Setting metrics-server=true in profile "embed-certs-880490"
	I0914 01:03:27.787775   74318 addons.go:69] Setting default-storageclass=true in profile "embed-certs-880490"
	I0914 01:03:27.787832   74318 addons.go:234] Setting addon metrics-server=true in "embed-certs-880490"
	W0914 01:03:27.787856   74318 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:27.787854   74318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-880490"
	I0914 01:03:27.787924   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.788230   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788281   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788370   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788404   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.788412   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.788438   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.789892   74318 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:27.791273   74318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:27.803662   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0914 01:03:27.803875   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0914 01:03:27.804070   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804406   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.804727   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804746   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.804891   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.804908   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.805085   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805290   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.805662   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805706   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.805858   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.805912   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.806091   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0914 01:03:27.806527   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.807073   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.807096   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.807431   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.807633   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.813722   74318 addons.go:234] Setting addon default-storageclass=true in "embed-certs-880490"
	W0914 01:03:27.813747   74318 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:27.813781   74318 host.go:66] Checking if "embed-certs-880490" exists ...
	I0914 01:03:27.814216   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.814263   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.822018   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0914 01:03:27.822523   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.823111   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.823135   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.823490   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.823604   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0914 01:03:27.823683   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.824287   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.824761   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.824780   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.825178   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.825391   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.825669   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.827101   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.828145   74318 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:27.829204   74318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:24.079149   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:24.579686   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.079667   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:25.579150   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:26.579690   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.079319   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.579499   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.079479   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:28.579170   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:27.829657   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0914 01:03:27.830001   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:27.830029   74318 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:27.830048   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.830333   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.830741   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.830761   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.831056   74318 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:27.831076   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:27.831094   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.831127   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.832083   74318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:27.832123   74318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:27.833716   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834165   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.834187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.834345   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.834552   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.834711   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.834873   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.835187   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835760   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.835776   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.835961   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.836089   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.836213   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.836291   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
	I0914 01:03:27.856418   74318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 01:03:27.856927   74318 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:27.857486   74318 main.go:141] libmachine: Using API Version  1
	I0914 01:03:27.857511   74318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:27.857862   74318 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:27.858030   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetState
	I0914 01:03:27.859994   74318 main.go:141] libmachine: (embed-certs-880490) Calling .DriverName
	I0914 01:03:27.860236   74318 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:27.860256   74318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:27.860274   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHHostname
	I0914 01:03:27.863600   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864250   74318 main.go:141] libmachine: (embed-certs-880490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:d0:a9", ip: ""} in network mk-embed-certs-880490: {Iface:virbr1 ExpiryTime:2024-09-14 02:03:03 +0000 UTC Type:0 Mac:52:54:00:2c:d0:a9 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:embed-certs-880490 Clientid:01:52:54:00:2c:d0:a9}
	I0914 01:03:27.864290   74318 main.go:141] libmachine: (embed-certs-880490) DBG | domain embed-certs-880490 has defined IP address 192.168.50.105 and MAC address 52:54:00:2c:d0:a9 in network mk-embed-certs-880490
	I0914 01:03:27.864371   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHPort
	I0914 01:03:27.864585   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHKeyPath
	I0914 01:03:27.864742   74318 main.go:141] libmachine: (embed-certs-880490) Calling .GetSSHUsername
	I0914 01:03:27.864892   74318 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa Username:docker}
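Each "new ssh client" entry above carries an IP, port, key path and username. A connection of that shape can be opened with golang.org/x/crypto/ssh; the sketch below reuses the address and key path from the log for illustration and skips host-key verification, which is acceptable only for throwaway test VMs like these.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user and address mirror the fields logged by sshutil above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19640-5422/.minikube/machines/embed-certs-880490/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		}
		client, err := ssh.Dial("tcp", "192.168.50.105:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}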
	I0914 01:03:27.995675   74318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:28.014477   74318 node_ready.go:35] waiting up to 6m0s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:28.078574   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:28.091764   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:28.091804   74318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:28.126389   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:28.126416   74318 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:28.183427   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:28.184131   74318 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:28.184153   74318 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:28.236968   74318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:29.207173   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023713329s)
	I0914 01:03:29.207245   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207259   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207312   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207331   74318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.128718888s)
	I0914 01:03:29.207339   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207353   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207367   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207589   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207607   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207616   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207624   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.207639   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207662   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207702   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207713   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207722   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207740   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207759   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207769   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.207806   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.207822   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.207855   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.207969   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208013   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.208197   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208229   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.208247   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208254   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.208263   74318 addons.go:475] Verifying addon metrics-server=true in "embed-certs-880490"
	I0914 01:03:29.208353   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.208387   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.214795   74318 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:29.214819   74318 main.go:141] libmachine: (embed-certs-880490) Calling .Close
	I0914 01:03:29.215061   74318 main.go:141] libmachine: (embed-certs-880490) DBG | Closing plugin on server side
	I0914 01:03:29.215086   74318 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:29.215094   74318 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:29.216947   74318 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0914 01:03:29.218195   74318 addons.go:510] duration metric: took 1.430602165s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
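As the lines above show, the addon manifests are staged under /etc/kubernetes/addons and then applied in a single kubectl invocation using the guest kubeconfig. The real run goes through minikube's SSH runner with sudo; the rough sketch below only illustrates assembling an equivalent multi-manifest apply locally with os/exec, with the paths copied from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}

		// Build "kubectl apply -f a -f b ..." as one argv; no shell quoting involved.
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}

		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			panic(err)
		}
	}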
	I0914 01:03:30.018994   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:28.195749   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:30.694597   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:29.652693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653211   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Found IP for machine: 192.168.72.203
	I0914 01:03:29.653239   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has current primary IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.653252   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserving static IP address...
	I0914 01:03:29.653679   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.653702   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Reserved static IP address: 192.168.72.203
	I0914 01:03:29.653715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | skip adding static IP to network mk-default-k8s-diff-port-754332 - found existing host DHCP lease matching {name: "default-k8s-diff-port-754332", mac: "52:54:00:a6:67:78", ip: "192.168.72.203"}
	I0914 01:03:29.653734   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Getting to WaitForSSH function...
	I0914 01:03:29.653747   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Waiting for SSH to be available...
	I0914 01:03:29.655991   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.656395   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.656564   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH client type: external
	I0914 01:03:29.656602   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa (-rw-------)
	I0914 01:03:29.656631   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 01:03:29.656645   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | About to run SSH command:
	I0914 01:03:29.656660   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | exit 0
	I0914 01:03:29.779885   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | SSH cmd err, output: <nil>: 
	I0914 01:03:29.780280   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetConfigRaw
	I0914 01:03:29.780973   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:29.783480   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.783882   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.783917   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.784196   73455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/config.json ...
	I0914 01:03:29.784498   73455 machine.go:93] provisionDockerMachine start ...
	I0914 01:03:29.784523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:29.784750   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.786873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787143   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.787172   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.787287   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.787479   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787634   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.787738   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.787896   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.788081   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.788094   73455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:03:29.887917   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 01:03:29.887942   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888184   73455 buildroot.go:166] provisioning hostname "default-k8s-diff-port-754332"
	I0914 01:03:29.888214   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:29.888398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:29.891121   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891609   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:29.891640   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:29.891878   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:29.892039   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892191   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:29.892365   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:29.892545   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:29.892719   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:29.892731   73455 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-754332 && echo "default-k8s-diff-port-754332" | sudo tee /etc/hostname
	I0914 01:03:30.009695   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-754332
	
	I0914 01:03:30.009724   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.012210   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012534   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.012562   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.012765   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.012949   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013098   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.013229   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.013418   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.013628   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.013655   73455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-754332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-754332/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-754332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:03:30.120378   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:03:30.120405   73455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19640-5422/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-5422/.minikube}
	I0914 01:03:30.120430   73455 buildroot.go:174] setting up certificates
	I0914 01:03:30.120443   73455 provision.go:84] configureAuth start
	I0914 01:03:30.120474   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetMachineName
	I0914 01:03:30.120717   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.123495   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.123895   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.123922   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.124092   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.126470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.126852   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.126873   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.127080   73455 provision.go:143] copyHostCerts
	I0914 01:03:30.127129   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem, removing ...
	I0914 01:03:30.127138   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem
	I0914 01:03:30.127192   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/ca.pem (1078 bytes)
	I0914 01:03:30.127285   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem, removing ...
	I0914 01:03:30.127294   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem
	I0914 01:03:30.127313   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/cert.pem (1123 bytes)
	I0914 01:03:30.127372   73455 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem, removing ...
	I0914 01:03:30.127382   73455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem
	I0914 01:03:30.127404   73455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-5422/.minikube/key.pem (1679 bytes)
	I0914 01:03:30.127447   73455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-754332 san=[127.0.0.1 192.168.72.203 default-k8s-diff-port-754332 localhost minikube]
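The server certificate above is generated on the host with the listed SANs (loopback, the machine IP, the profile name, localhost, minikube) and signed by the profile CA. A condensed sketch of issuing such a cert with crypto/x509, where caCert and caKey are assumed to be already loaded and the helper name issueServerCert is hypothetical:

	package pki

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate carrying the given SANs with an existing CA.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-754332"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames, // e.g. default-k8s-diff-port-754332, localhost, minikube
			IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.72.203
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}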
	I0914 01:03:30.216498   73455 provision.go:177] copyRemoteCerts
	I0914 01:03:30.216559   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:03:30.216580   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.219146   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219539   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.219570   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.219768   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.219951   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.220089   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.220208   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.301855   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:03:30.324277   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 01:03:30.345730   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:03:30.369290   73455 provision.go:87] duration metric: took 248.829163ms to configureAuth
	I0914 01:03:30.369323   73455 buildroot.go:189] setting minikube options for container-runtime
	I0914 01:03:30.369573   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:30.369676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.372546   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.372897   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.372920   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.373188   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.373398   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.373693   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.373825   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.373982   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.373999   73455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:03:30.584051   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:03:30.584080   73455 machine.go:96] duration metric: took 799.56475ms to provisionDockerMachine
	I0914 01:03:30.584094   73455 start.go:293] postStartSetup for "default-k8s-diff-port-754332" (driver="kvm2")
	I0914 01:03:30.584106   73455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:03:30.584129   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.584449   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:03:30.584483   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.587327   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.587826   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.587861   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.588036   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.588195   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.588405   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.588568   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.671069   73455 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:03:30.675186   73455 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 01:03:30.675211   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/addons for local assets ...
	I0914 01:03:30.675281   73455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5422/.minikube/files for local assets ...
	I0914 01:03:30.675356   73455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem -> 126022.pem in /etc/ssl/certs
	I0914 01:03:30.675440   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:03:30.684470   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:30.708253   73455 start.go:296] duration metric: took 124.146362ms for postStartSetup
	I0914 01:03:30.708290   73455 fix.go:56] duration metric: took 19.043574544s for fixHost
	I0914 01:03:30.708317   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.710731   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711082   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.711113   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.711268   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.711451   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711610   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.711772   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.711948   73455 main.go:141] libmachine: Using SSH client type: native
	I0914 01:03:30.712120   73455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.203 22 <nil> <nil>}
	I0914 01:03:30.712131   73455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 01:03:30.812299   73455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726275810.776460586
	
	I0914 01:03:30.812327   73455 fix.go:216] guest clock: 1726275810.776460586
	I0914 01:03:30.812342   73455 fix.go:229] Guest: 2024-09-14 01:03:30.776460586 +0000 UTC Remote: 2024-09-14 01:03:30.708293415 +0000 UTC m=+353.376555108 (delta=68.167171ms)
	I0914 01:03:30.812370   73455 fix.go:200] guest clock delta is within tolerance: 68.167171ms
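The clock-skew check above runs `date +%s.%N` in the guest and compares the result against the host wall clock, accepting the machine only if the delta stays within tolerance. A minimal sketch of that comparison, assuming the guest output has already been captured as a string (the helper name guestClockDelta is hypothetical):

	package clockcheck

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output from the guest and returns the
	// absolute difference between the guest clock and the local (host) clock.
	func guestClockDelta(guestOut string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest time %q: %w", guestOut, err)
		}
		// float64 loses some nanosecond precision here, which is fine for a skew check.
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

With the values logged above (guest 01:03:30.776 against host 01:03:30.708), this comparison yields roughly the 68 ms delta reported as within tolerance.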
	I0914 01:03:30.812380   73455 start.go:83] releasing machines lock for "default-k8s-diff-port-754332", held for 19.147703384s
	I0914 01:03:30.812417   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.812715   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:30.815470   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815886   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.815924   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.815997   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816496   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816654   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: cat /version.json
	I0914 01:03:30.816857   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.816827   73455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:03:30.816943   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:30.819388   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819661   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819759   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.819801   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.819934   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820061   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:30.820093   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:30.820110   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820244   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820256   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:30.820382   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:30.820400   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.820516   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:30.820619   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:30.927966   73455 ssh_runner.go:195] Run: systemctl --version
	I0914 01:03:30.934220   73455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:03:31.078477   73455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 01:03:31.085527   73455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 01:03:31.085610   73455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:03:31.101213   73455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 01:03:31.101246   73455 start.go:495] detecting cgroup driver to use...
	I0914 01:03:31.101320   73455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:03:31.118541   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:03:31.133922   73455 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:03:31.133996   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:03:31.148427   73455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:03:31.162929   73455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:03:31.281620   73455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:03:31.417175   73455 docker.go:233] disabling docker service ...
	I0914 01:03:31.417244   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:03:31.431677   73455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:03:31.444504   73455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:03:31.595948   73455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:03:31.746209   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
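	The stop/disable/mask sequence above makes sure cri-dockerd and dockerd no longer compete for the CRI socket before CRI-O is reconfigured and restarted below. A generic spot-check of the end state on the guest (not something the test itself runs) would be:
	    systemctl is-active crio docker containerd cri-docker.service
	    systemctl is-enabled docker.service cri-docker.service   # expected: masked
	    systemctl is-enabled docker.socket cri-docker.socket     # expected: disabled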
	I0914 01:03:31.764910   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:03:31.783199   73455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:03:31.783268   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.793541   73455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:03:31.793615   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.803864   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.814843   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.825601   73455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:03:31.836049   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.846312   73455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.864585   73455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:03:31.874861   73455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:03:31.884895   73455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 01:03:31.884953   73455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 01:03:31.897684   73455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:03:31.907461   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:32.025078   73455 ssh_runner.go:195] Run: sudo systemctl restart crio
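	Taken together, the crictl.yaml write and the sed edits above leave the runtime pointed at CRI-O with the settings below. This is an illustrative reconstruction of only the touched keys (shown under their usual CRI-O sections), not the full files:
	    # /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	
	    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]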
	I0914 01:03:32.120427   73455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:03:32.120497   73455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:03:32.125250   73455 start.go:563] Will wait 60s for crictl version
	I0914 01:03:32.125313   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:03:32.128802   73455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:03:32.170696   73455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 01:03:32.170778   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.200683   73455 ssh_runner.go:195] Run: crio --version
	I0914 01:03:32.230652   73455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 01:03:32.231774   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetIP
	I0914 01:03:32.234336   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:32.234687   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:32.234845   73455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 01:03:32.238647   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:32.250489   73455 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:03:32.250610   73455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:03:32.250657   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:32.286097   73455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 01:03:32.286158   73455 ssh_runner.go:195] Run: which lz4
	I0914 01:03:32.290255   73455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 01:03:32.294321   73455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 01:03:32.294358   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 01:03:29.078829   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:29.578862   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.078918   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:30.579509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.078852   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:31.579516   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.079106   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.579535   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.079750   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:33.579665   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:32.518191   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:34.519482   74318 node_ready.go:53] node "embed-certs-880490" has status "Ready":"False"
	I0914 01:03:35.519639   74318 node_ready.go:49] node "embed-certs-880490" has status "Ready":"True"
	I0914 01:03:35.519666   74318 node_ready.go:38] duration metric: took 7.505156447s for node "embed-certs-880490" to be "Ready" ...
	I0914 01:03:35.519678   74318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:35.527730   74318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534521   74318 pod_ready.go:93] pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:35.534545   74318 pod_ready.go:82] duration metric: took 6.783225ms for pod "coredns-7c65d6cfc9-ssskq" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:35.534563   74318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:32.694869   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:34.699839   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.195182   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:33.584974   73455 crio.go:462] duration metric: took 1.294751283s to copy over tarball
	I0914 01:03:33.585123   73455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 01:03:35.744216   73455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159060899s)
	I0914 01:03:35.744248   73455 crio.go:469] duration metric: took 2.159192932s to extract the tarball
	I0914 01:03:35.744258   73455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 01:03:35.781892   73455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:03:35.822104   73455 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:03:35.822127   73455 cache_images.go:84] Images are preloaded, skipping loading
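	Whether the preload actually landed can be verified on the node with the same crictl binary the test calls above; the v1.31.1 control-plane images should all be listed, for example:
	    sudo crictl images | grep registry.k8s.io/kube-apiserver
	    sudo crictl images | wc -l    # should be well above the handful of base images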
	I0914 01:03:35.822135   73455 kubeadm.go:934] updating node { 192.168.72.203 8444 v1.31.1 crio true true} ...
	I0914 01:03:35.822222   73455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-754332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:03:35.822304   73455 ssh_runner.go:195] Run: crio config
	I0914 01:03:35.866904   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:35.866931   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:35.866942   73455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:03:35.866964   73455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.203 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-754332 NodeName:default-k8s-diff-port-754332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:03:35.867130   73455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.203
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-754332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:03:35.867205   73455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:03:35.877591   73455 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:03:35.877683   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:03:35.887317   73455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0914 01:03:35.904426   73455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:03:35.920731   73455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0914 01:03:35.939238   73455 ssh_runner.go:195] Run: grep 192.168.72.203	control-plane.minikube.internal$ /etc/hosts
	I0914 01:03:35.943163   73455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:03:35.955568   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:36.075433   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
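	At this point the kubelet has been (re)started from the drop-in written a few lines up; a generic way to confirm systemd picked up the ExecStart override (hostname-override, node-ip, bootstrap kubeconfig) is:
	    systemctl cat kubelet                           # prints kubelet.service plus 10-kubeadm.conf
	    systemctl show kubelet -p ExecStart --no-pager  # the flags from the drop-in should appear here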
	I0914 01:03:36.095043   73455 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332 for IP: 192.168.72.203
	I0914 01:03:36.095066   73455 certs.go:194] generating shared ca certs ...
	I0914 01:03:36.095081   73455 certs.go:226] acquiring lock for ca certs: {Name:mk34be5dca4d5e7be79350ed50e5d1cda46dbec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:36.095241   73455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key
	I0914 01:03:36.095292   73455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key
	I0914 01:03:36.095303   73455 certs.go:256] generating profile certs ...
	I0914 01:03:36.095378   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/client.key
	I0914 01:03:36.095438   73455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key.693e926b
	I0914 01:03:36.095470   73455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key
	I0914 01:03:36.095601   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem (1338 bytes)
	W0914 01:03:36.095629   73455 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602_empty.pem, impossibly tiny 0 bytes
	I0914 01:03:36.095643   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 01:03:36.095665   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:03:36.095688   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:03:36.095709   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/certs/key.pem (1679 bytes)
	I0914 01:03:36.095748   73455 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem (1708 bytes)
	I0914 01:03:36.096382   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:03:36.128479   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:03:36.173326   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:03:36.225090   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:03:36.260861   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 01:03:36.287480   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 01:03:36.312958   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:03:36.338212   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/default-k8s-diff-port-754332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:03:36.371168   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/ssl/certs/126022.pem --> /usr/share/ca-certificates/126022.pem (1708 bytes)
	I0914 01:03:36.398963   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:03:36.426920   73455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-5422/.minikube/certs/12602.pem --> /usr/share/ca-certificates/12602.pem (1338 bytes)
	I0914 01:03:36.462557   73455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:03:36.484730   73455 ssh_runner.go:195] Run: openssl version
	I0914 01:03:36.493195   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126022.pem && ln -fs /usr/share/ca-certificates/126022.pem /etc/ssl/certs/126022.pem"
	I0914 01:03:36.508394   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.514954   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:45 /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.515030   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126022.pem
	I0914 01:03:36.521422   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126022.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:03:36.533033   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:03:36.544949   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549664   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.549727   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:03:36.555419   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:03:36.566365   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12602.pem && ln -fs /usr/share/ca-certificates/12602.pem /etc/ssl/certs/12602.pem"
	I0914 01:03:36.577663   73455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582901   73455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:45 /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.582971   73455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12602.pem
	I0914 01:03:36.589291   73455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12602.pem /etc/ssl/certs/51391683.0"
	I0914 01:03:36.603524   73455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:03:36.609867   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:03:36.619284   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:03:36.627246   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:03:36.635744   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:03:36.643493   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:03:36.650143   73455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
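	The -checkend 86400 probes above ask openssl whether each certificate is still valid for at least the next 24 hours (exit 0 if it is, non-zero if it would expire). The same check can be run by hand against any of the certs under /var/lib/minikube/certs, for example:
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "still valid for 24h" || echo "expires within 24h"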
	I0914 01:03:36.656352   73455 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-754332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-754332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:03:36.656430   73455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:03:36.656476   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.700763   73455 cri.go:89] found id: ""
	I0914 01:03:36.700841   73455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:03:36.711532   73455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:03:36.711553   73455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:03:36.711601   73455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:03:36.722598   73455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:03:36.723698   73455 kubeconfig.go:125] found "default-k8s-diff-port-754332" server: "https://192.168.72.203:8444"
	I0914 01:03:36.726559   73455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:03:36.737411   73455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.203
	I0914 01:03:36.737450   73455 kubeadm.go:1160] stopping kube-system containers ...
	I0914 01:03:36.737463   73455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 01:03:36.737519   73455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:03:36.781843   73455 cri.go:89] found id: ""
	I0914 01:03:36.781923   73455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 01:03:36.802053   73455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:03:36.812910   73455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:03:36.812933   73455 kubeadm.go:157] found existing configuration files:
	
	I0914 01:03:36.812987   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 01:03:36.823039   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:03:36.823108   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:03:36.832740   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 01:03:36.841429   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:03:36.841497   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:03:36.850571   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.859463   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:03:36.859530   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:03:36.868677   73455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 01:03:36.877742   73455 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:03:36.877799   73455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:03:36.887331   73455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:03:36.897395   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.006836   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:34.079548   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:34.578789   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.079064   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:35.579631   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.079470   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:36.579459   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.079704   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.579111   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.078898   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.579305   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:37.541560   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.041409   74318 pod_ready.go:103] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:40.540976   74318 pod_ready.go:93] pod "etcd-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.541001   74318 pod_ready.go:82] duration metric: took 5.006431907s for pod "etcd-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.541010   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545912   74318 pod_ready.go:93] pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.545930   74318 pod_ready.go:82] duration metric: took 4.913892ms for pod "kube-apiserver-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.545939   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550467   74318 pod_ready.go:93] pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.550486   74318 pod_ready.go:82] duration metric: took 4.539367ms for pod "kube-controller-manager-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.550495   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554835   74318 pod_ready.go:93] pod "kube-proxy-566n8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.554858   74318 pod_ready.go:82] duration metric: took 4.357058ms for pod "kube-proxy-566n8" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.554869   74318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559287   74318 pod_ready.go:93] pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:40.559310   74318 pod_ready.go:82] duration metric: took 4.34883ms for pod "kube-scheduler-embed-certs-880490" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:40.559321   74318 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:39.195934   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:41.695133   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:37.551834   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.766998   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:37.844309   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
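	These five "kubeadm init phase" calls are the cluster-restart path regenerating exactly the pieces the earlier config check found missing: certificates under /var/lib/minikube/certs, the kubeconfigs under /etc/kubernetes, the kubelet bootstrap, and the control-plane plus etcd static-pod manifests. Assuming the paths from the kubeadm config above, the outputs can be inspected on the node with:
	    sudo ls /var/lib/minikube/certs/       # apiserver.crt, front-proxy-*, sa.key, etcd/, ...
	    sudo ls /etc/kubernetes/               # admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf, ...
	    sudo ls /etc/kubernetes/manifests/     # kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, etcd.yaml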
	I0914 01:03:37.937678   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:03:37.937756   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.438583   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:38.938054   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.438436   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.938739   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.955976   73455 api_server.go:72] duration metric: took 2.018296029s to wait for apiserver process to appear ...
	I0914 01:03:39.956007   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:03:39.956033   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:39.079732   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:39.578873   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.079509   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:40.579639   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.079505   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:41.579581   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.079258   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.579491   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.079572   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:43.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:42.403502   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.403535   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.403552   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.416634   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 01:03:42.416672   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 01:03:42.456884   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.507709   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.507740   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:42.956742   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:42.961066   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:42.961093   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.456175   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.462252   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:03:43.462284   73455 api_server.go:103] status: https://192.168.72.203:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:03:43.956927   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:03:43.961353   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:03:43.967700   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:03:43.967730   73455 api_server.go:131] duration metric: took 4.011716133s to wait for apiserver health ...
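The loop above keeps polling https://192.168.72.203:8444/healthz and logs the full 500 body (the rbac/bootstrap-roles post-start hook has not finished yet) until the endpoint finally returns 200 about four seconds later. A minimal Go sketch of that polling pattern, using the address and timings from the log as assumptions; this is an illustration, not minikube's api_server.go implementation:

// Sketch only: poll an apiserver /healthz endpoint until it returns 200,
// mirroring the retry pattern in the log above. URL and timings are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is self-signed here, so verification is
		// skipped for the sketch; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.203:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		// A 500 body lists each check, e.g. "[-]poststarthook/rbac/bootstrap-roles failed".
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}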
	I0914 01:03:43.967738   73455 cni.go:84] Creating CNI manager for ""
	I0914 01:03:43.967744   73455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:03:43.969746   73455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 01:03:43.971066   73455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:03:43.984009   73455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
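The 496-byte conflist copied above is not shown in the log. For orientation only, a generic bridge CNI configuration (bridge plugin with host-local IPAM plus portmap) looks roughly like the sketch below; the names and subnet are placeholders, not the exact file minikube generates:

// Sketch only: write a generic bridge CNI conflist like the one scp'd above.
// The contents are illustrative; minikube's actual 1-k8s.conflist may differ.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	// Writing to /etc/cni/net.d requires root, matching the `sudo mkdir -p` above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}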
	I0914 01:03:44.003665   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:03:44.018795   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:03:44.018860   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:03:44.018879   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 01:03:44.018890   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:03:44.018904   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:03:44.018917   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 01:03:44.018926   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 01:03:44.018931   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:03:44.018938   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 01:03:44.018952   73455 system_pods.go:74] duration metric: took 15.266342ms to wait for pod list to return data ...
	I0914 01:03:44.018958   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:03:44.025060   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:03:44.025085   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:03:44.025095   73455 node_conditions.go:105] duration metric: took 6.132281ms to run NodePressure ...
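The node_conditions check above reads each node's capacity (2 CPUs and ~17 GiB of ephemeral storage here) and confirms no pressure conditions are set. A minimal client-go sketch of the same inspection, assuming a placeholder kubeconfig path and a recent client-go; it is not the code minikube runs:

// Sketch only: list node capacity and pressure conditions via client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure and PIDPressure should all be False on a healthy node.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node pressure detected: %s\n", c.Type)
			}
		}
	}
}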
	I0914 01:03:44.025112   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 01:03:44.297212   73455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301907   73455 kubeadm.go:739] kubelet initialised
	I0914 01:03:44.301934   73455 kubeadm.go:740] duration metric: took 4.67527ms waiting for restarted kubelet to initialise ...
	I0914 01:03:44.301942   73455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:44.307772   73455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.313725   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313768   73455 pod_ready.go:82] duration metric: took 5.941724ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.313784   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.313805   73455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.319213   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319244   73455 pod_ready.go:82] duration metric: took 5.42808ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.319258   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.319266   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.324391   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324416   73455 pod_ready.go:82] duration metric: took 5.141454ms for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.324431   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.324439   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.407389   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407420   73455 pod_ready.go:82] duration metric: took 82.971806ms for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.407435   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.407443   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:44.807480   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807514   73455 pod_ready.go:82] duration metric: took 400.064486ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:44.807525   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-proxy-f9qhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:44.807534   73455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.207543   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207568   73455 pod_ready.go:82] duration metric: took 400.02666ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.207580   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.207586   73455 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:45.607672   73455 pod_ready.go:98] node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607704   73455 pod_ready.go:82] duration metric: took 400.110479ms for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:03:45.607718   73455 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-754332" hosting pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:45.607728   73455 pod_ready.go:39] duration metric: took 1.305776989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
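The pod_ready loop above skips each system-critical pod while its node is still NotReady and then retries; the readiness signal it is ultimately waiting for is the PodReady condition on every matching pod. A minimal client-go sketch of that wait, with a placeholder kubeconfig path and the kube-dns selector used as an example; not minikube's pod_ready.go implementation:

// Sketch only: wait until every pod matching a label selector reports the
// PodReady condition as True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := allReady(cs, "kube-system", "k8s-app=kube-dns"); err == nil && ready {
			fmt.Println("all matching pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}

func allReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}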
	I0914 01:03:45.607767   73455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:03:45.622336   73455 ops.go:34] apiserver oom_adj: -16
	I0914 01:03:45.622358   73455 kubeadm.go:597] duration metric: took 8.910797734s to restartPrimaryControlPlane
	I0914 01:03:45.622369   73455 kubeadm.go:394] duration metric: took 8.966028708s to StartCluster
	I0914 01:03:45.622390   73455 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.622484   73455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:03:45.625196   73455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:03:45.625509   73455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:03:45.625555   73455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:03:45.625687   73455 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625714   73455 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625726   73455 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:03:45.625720   73455 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625728   73455 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-754332"
	I0914 01:03:45.625752   73455 config.go:182] Loaded profile config "default-k8s-diff-port-754332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:03:45.625761   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.625769   73455 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.625779   73455 addons.go:243] addon metrics-server should already be in state true
	I0914 01:03:45.625753   73455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-754332"
	I0914 01:03:45.625818   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.626185   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626210   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626228   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626243   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.626187   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.626310   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.627306   73455 out.go:177] * Verifying Kubernetes components...
	I0914 01:03:45.628955   73455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:03:45.643497   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0914 01:03:45.643779   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I0914 01:03:45.643969   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644171   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644331   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0914 01:03:45.644521   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644544   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644667   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.644764   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.644788   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.644862   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645141   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.645159   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.645182   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.645342   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.645578   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.645617   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.645687   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.646157   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.646199   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.649274   73455 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-754332"
	W0914 01:03:45.649298   73455 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:03:45.649330   73455 host.go:66] Checking if "default-k8s-diff-port-754332" exists ...
	I0914 01:03:45.649722   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.649767   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.663455   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0914 01:03:45.663487   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0914 01:03:45.664007   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664128   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.664652   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664655   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.664681   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.664702   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.665053   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665063   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.665227   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.665374   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.666915   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.667360   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.668778   73455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:03:45.668789   73455 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:03:42.566973   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.065222   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:44.194383   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:46.695202   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:45.669904   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0914 01:03:45.670250   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:03:45.670262   73455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:03:45.670277   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.670350   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.670420   73455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.670434   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:03:45.670450   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.671128   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.671146   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.671536   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.672089   73455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:03:45.672133   73455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:03:45.674171   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674649   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.674676   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.674898   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.674966   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675073   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675221   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675315   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.675348   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.675385   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.675491   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.675647   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.675829   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.675938   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.708991   73455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0914 01:03:45.709537   73455 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:03:45.710070   73455 main.go:141] libmachine: Using API Version  1
	I0914 01:03:45.710091   73455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:03:45.710429   73455 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:03:45.710592   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetState
	I0914 01:03:45.712228   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .DriverName
	I0914 01:03:45.712565   73455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.712584   73455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:03:45.712604   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHHostname
	I0914 01:03:45.715656   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716116   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:67:78", ip: ""} in network mk-default-k8s-diff-port-754332: {Iface:virbr4 ExpiryTime:2024-09-14 02:03:22 +0000 UTC Type:0 Mac:52:54:00:a6:67:78 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:default-k8s-diff-port-754332 Clientid:01:52:54:00:a6:67:78}
	I0914 01:03:45.716140   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | domain default-k8s-diff-port-754332 has defined IP address 192.168.72.203 and MAC address 52:54:00:a6:67:78 in network mk-default-k8s-diff-port-754332
	I0914 01:03:45.716337   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHPort
	I0914 01:03:45.716512   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHKeyPath
	I0914 01:03:45.716672   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .GetSSHUsername
	I0914 01:03:45.716797   73455 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/default-k8s-diff-port-754332/id_rsa Username:docker}
	I0914 01:03:45.822712   73455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:03:45.842455   73455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:45.916038   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:03:45.920701   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:03:45.946755   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:03:45.946784   73455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:03:46.023074   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:03:46.023105   73455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:03:46.084355   73455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:46.084382   73455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:03:46.153505   73455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:03:47.080118   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16404588s)
	I0914 01:03:47.080168   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080181   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080180   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.159444015s)
	I0914 01:03:47.080221   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080236   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080523   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080549   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.080579   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080591   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.080605   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.080616   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.080810   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.080827   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.081982   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.081994   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082005   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.082010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.082367   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.082382   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.082369   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.090963   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.090986   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.091360   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.091411   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.091381   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.233665   73455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08012624s)
	I0914 01:03:47.233709   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.233721   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.233970   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.233991   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234001   73455 main.go:141] libmachine: Making call to close driver server
	I0914 01:03:47.234010   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) Calling .Close
	I0914 01:03:47.234264   73455 main.go:141] libmachine: (default-k8s-diff-port-754332) DBG | Closing plugin on server side
	I0914 01:03:47.234331   73455 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:03:47.234342   73455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:03:47.234353   73455 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-754332"
	I0914 01:03:47.236117   73455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 01:03:47.237116   73455 addons.go:510] duration metric: took 1.611577276s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
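The addon step above applies the metrics-server manifests and then reports "Verifying addon metrics-server=true". One rough way to confirm the outcome from a client is to check the ready replicas on the metrics-server Deployment (the name matches the pod prefix seen throughout the log); a minimal client-go sketch under the same placeholder assumptions as the earlier sketches, not minikube's addons.go verification logic:

// Sketch only: check whether the metrics-server Deployment has all replicas ready.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	want := int32(1)
	if dep.Spec.Replicas != nil {
		want = *dep.Spec.Replicas
	}
	fmt.Printf("metrics-server: %d/%d replicas ready\n", dep.Status.ReadyReplicas, want)
}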
	I0914 01:03:44.078881   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:44.579009   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.079332   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:45.579671   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.079420   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:46.579674   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.079399   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.579668   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.079540   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:48.579640   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:47.067158   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.565947   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.566635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:49.194971   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:51.693620   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:47.846195   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:50.346549   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:49.079772   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:49.579679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.079692   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:50.579238   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.079870   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:51.579747   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.079882   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:52.579344   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.079398   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:53.578880   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.067654   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.567116   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:53.693702   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:55.693929   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:52.846490   73455 node_ready.go:53] node "default-k8s-diff-port-754332" has status "Ready":"False"
	I0914 01:03:53.345438   73455 node_ready.go:49] node "default-k8s-diff-port-754332" has status "Ready":"True"
	I0914 01:03:53.345467   73455 node_ready.go:38] duration metric: took 7.502970865s for node "default-k8s-diff-port-754332" to be "Ready" ...
	I0914 01:03:53.345476   73455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:03:53.354593   73455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359868   73455 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.359891   73455 pod_ready.go:82] duration metric: took 5.265319ms for pod "coredns-7c65d6cfc9-5lgsh" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.359900   73455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364354   73455 pod_ready.go:93] pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:53.364384   73455 pod_ready.go:82] duration metric: took 4.476679ms for pod "etcd-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:53.364393   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:55.372299   73455 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:56.872534   73455 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:56.872559   73455 pod_ready.go:82] duration metric: took 3.508159097s for pod "kube-apiserver-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:56.872569   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:54.079669   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:54.579514   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.078960   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:55.579233   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.079419   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:56.579182   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.079087   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:57.579739   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.079247   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:58.579513   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.066327   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:01.565043   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:57.694827   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:00.193396   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:02.195235   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:58.879733   73455 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.378456   73455 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.378482   73455 pod_ready.go:82] duration metric: took 2.50590556s for pod "kube-controller-manager-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.378503   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382836   73455 pod_ready.go:93] pod "kube-proxy-f9qhk" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.382864   73455 pod_ready.go:82] duration metric: took 4.352317ms for pod "kube-proxy-f9qhk" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.382876   73455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386934   73455 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace has status "Ready":"True"
	I0914 01:03:59.386954   73455 pod_ready.go:82] duration metric: took 4.071453ms for pod "kube-scheduler-default-k8s-diff-port-754332" in "kube-system" namespace to be "Ready" ...
	I0914 01:03:59.386963   73455 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	I0914 01:04:01.393183   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:03:59.079331   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:03:59.579832   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.078827   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:00.579655   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.079144   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:01.579525   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.079174   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:02.579490   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:03.079284   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:03.079356   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:03.119558   74039 cri.go:89] found id: ""
	I0914 01:04:03.119588   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.119599   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:03.119607   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:03.119667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:03.157146   74039 cri.go:89] found id: ""
	I0914 01:04:03.157179   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.157190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:03.157197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:03.157263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:03.190300   74039 cri.go:89] found id: ""
	I0914 01:04:03.190328   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.190338   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:03.190345   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:03.190400   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:03.223492   74039 cri.go:89] found id: ""
	I0914 01:04:03.223516   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.223524   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:03.223530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:03.223578   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:03.261047   74039 cri.go:89] found id: ""
	I0914 01:04:03.261074   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.261082   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:03.261093   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:03.261139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:03.296868   74039 cri.go:89] found id: ""
	I0914 01:04:03.296896   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.296908   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:03.296915   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:03.296979   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:03.328823   74039 cri.go:89] found id: ""
	I0914 01:04:03.328858   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.328870   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:03.328877   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:03.328950   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:03.361686   74039 cri.go:89] found id: ""
	I0914 01:04:03.361711   74039 logs.go:276] 0 containers: []
	W0914 01:04:03.361720   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:03.361729   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:03.361740   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:03.414496   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:03.414537   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:03.429303   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:03.429333   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:03.549945   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:03.549964   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:03.549975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:03.630643   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:03.630687   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
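With no control-plane containers found, the tooling above falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output over SSH. The same shell invocations (taken verbatim from the log) can be reproduced on the node with os/exec; a minimal local sketch, run directly on the VM rather than through minikube's ssh_runner:

// Sketch only: run the diagnostic commands gathered in the log above, locally.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("command %q failed: %v\n", c, err)
		}
		fmt.Printf("=== %s ===\n%s\n", c, out)
	}
}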
	I0914 01:04:03.565402   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.565971   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:04.694554   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.694978   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:03.393484   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:05.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:06.176389   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:06.205468   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:06.205530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:06.237656   74039 cri.go:89] found id: ""
	I0914 01:04:06.237694   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.237706   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:06.237714   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:06.237776   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:06.274407   74039 cri.go:89] found id: ""
	I0914 01:04:06.274445   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.274458   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:06.274465   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:06.274557   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:06.310069   74039 cri.go:89] found id: ""
	I0914 01:04:06.310104   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.310114   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:06.310121   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:06.310185   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:06.346572   74039 cri.go:89] found id: ""
	I0914 01:04:06.346608   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.346619   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:06.346626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:06.346690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:06.380900   74039 cri.go:89] found id: ""
	I0914 01:04:06.380928   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.380936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:06.380941   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:06.381028   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:06.418361   74039 cri.go:89] found id: ""
	I0914 01:04:06.418387   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.418395   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:06.418401   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:06.418459   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:06.458802   74039 cri.go:89] found id: ""
	I0914 01:04:06.458834   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.458845   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:06.458851   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:06.458921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:06.496195   74039 cri.go:89] found id: ""
	I0914 01:04:06.496222   74039 logs.go:276] 0 containers: []
	W0914 01:04:06.496232   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:06.496243   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:06.496274   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:06.583625   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:06.583660   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:06.583677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:06.667887   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:06.667930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:06.708641   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:06.708677   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:06.765650   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:06.765684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:08.065494   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.066365   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.194586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:11.694004   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:08.393863   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:10.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:09.280721   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:09.294228   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:09.294304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:09.329258   74039 cri.go:89] found id: ""
	I0914 01:04:09.329288   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.329299   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:09.329306   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:09.329368   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:09.361903   74039 cri.go:89] found id: ""
	I0914 01:04:09.361930   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.361940   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:09.361947   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:09.362005   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:09.397843   74039 cri.go:89] found id: ""
	I0914 01:04:09.397873   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.397885   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:09.397894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:09.397956   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:09.430744   74039 cri.go:89] found id: ""
	I0914 01:04:09.430776   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.430789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:09.430797   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:09.430858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:09.463593   74039 cri.go:89] found id: ""
	I0914 01:04:09.463623   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.463634   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:09.463641   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:09.463701   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:09.497207   74039 cri.go:89] found id: ""
	I0914 01:04:09.497240   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.497251   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:09.497259   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:09.497330   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:09.530199   74039 cri.go:89] found id: ""
	I0914 01:04:09.530231   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.530243   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:09.530251   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:09.530313   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:09.565145   74039 cri.go:89] found id: ""
	I0914 01:04:09.565173   74039 logs.go:276] 0 containers: []
	W0914 01:04:09.565180   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:09.565188   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:09.565199   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:09.603562   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:09.603594   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:09.654063   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:09.654105   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:09.667900   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:09.667928   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:09.740320   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:09.740349   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:09.740394   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.320693   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:12.333814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:12.333884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:12.371119   74039 cri.go:89] found id: ""
	I0914 01:04:12.371145   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.371156   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:12.371163   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:12.371223   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:12.408281   74039 cri.go:89] found id: ""
	I0914 01:04:12.408308   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.408318   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:12.408324   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:12.408371   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:12.451978   74039 cri.go:89] found id: ""
	I0914 01:04:12.452003   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.452011   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:12.452016   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:12.452076   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:12.490743   74039 cri.go:89] found id: ""
	I0914 01:04:12.490777   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.490789   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:12.490796   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:12.490851   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:12.523213   74039 cri.go:89] found id: ""
	I0914 01:04:12.523248   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.523260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:12.523271   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:12.523333   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:12.557555   74039 cri.go:89] found id: ""
	I0914 01:04:12.557582   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.557592   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:12.557601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:12.557665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:12.597590   74039 cri.go:89] found id: ""
	I0914 01:04:12.597624   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.597636   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:12.597643   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:12.597705   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:12.640688   74039 cri.go:89] found id: ""
	I0914 01:04:12.640718   74039 logs.go:276] 0 containers: []
	W0914 01:04:12.640729   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:12.640740   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:12.640753   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:12.698531   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:12.698566   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:12.752039   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:12.752078   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:12.767617   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:12.767649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:12.833226   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:12.833257   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:12.833284   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:12.565890   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.066774   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:13.694053   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.695720   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:12.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.395018   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:15.413936   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:15.426513   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:15.426590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:15.460515   74039 cri.go:89] found id: ""
	I0914 01:04:15.460558   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.460570   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:15.460579   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:15.460646   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:15.495856   74039 cri.go:89] found id: ""
	I0914 01:04:15.495883   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.495893   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:15.495901   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:15.495966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:15.531667   74039 cri.go:89] found id: ""
	I0914 01:04:15.531696   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.531707   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:15.531714   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:15.531779   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:15.568640   74039 cri.go:89] found id: ""
	I0914 01:04:15.568667   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.568674   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:15.568680   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:15.568732   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:15.601908   74039 cri.go:89] found id: ""
	I0914 01:04:15.601940   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.601950   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:15.601958   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:15.602019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:15.634636   74039 cri.go:89] found id: ""
	I0914 01:04:15.634671   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.634683   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:15.634690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:15.634761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:15.671873   74039 cri.go:89] found id: ""
	I0914 01:04:15.671904   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.671916   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:15.671923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:15.671986   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:15.712401   74039 cri.go:89] found id: ""
	I0914 01:04:15.712435   74039 logs.go:276] 0 containers: []
	W0914 01:04:15.712447   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:15.712457   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:15.712471   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:15.763623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:15.763664   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:15.779061   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:15.779098   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:15.854203   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:15.854235   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:15.854249   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:15.937926   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:15.937965   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.477313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:18.492814   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:18.492880   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:18.532232   74039 cri.go:89] found id: ""
	I0914 01:04:18.532263   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.532274   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:18.532282   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:18.532348   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:18.588062   74039 cri.go:89] found id: ""
	I0914 01:04:18.588152   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.588169   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:18.588177   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:18.588237   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:18.632894   74039 cri.go:89] found id: ""
	I0914 01:04:18.632922   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.632930   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:18.632936   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:18.632999   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:18.667578   74039 cri.go:89] found id: ""
	I0914 01:04:18.667608   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.667618   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:18.667626   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:18.667685   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:18.701016   74039 cri.go:89] found id: ""
	I0914 01:04:18.701046   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.701057   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:18.701064   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:18.701118   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:18.737857   74039 cri.go:89] found id: ""
	I0914 01:04:18.737880   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.737887   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:18.737893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:18.737941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:18.768501   74039 cri.go:89] found id: ""
	I0914 01:04:18.768540   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.768552   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:18.768560   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:18.768618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:18.802908   74039 cri.go:89] found id: ""
	I0914 01:04:18.802938   74039 logs.go:276] 0 containers: []
	W0914 01:04:18.802950   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:18.802960   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:18.802971   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:18.837795   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:18.837822   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:18.895312   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:18.895352   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:17.566051   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.065379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.193754   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:20.194750   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:17.893692   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:19.894255   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:18.909242   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:18.909273   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:18.977745   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:18.977767   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:18.977778   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:21.557261   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:21.570238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:21.570316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:21.608928   74039 cri.go:89] found id: ""
	I0914 01:04:21.608953   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.608961   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:21.608967   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:21.609017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:21.643746   74039 cri.go:89] found id: ""
	I0914 01:04:21.643779   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.643804   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:21.643811   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:21.643866   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:21.677807   74039 cri.go:89] found id: ""
	I0914 01:04:21.677835   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.677849   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:21.677855   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:21.677905   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:21.713612   74039 cri.go:89] found id: ""
	I0914 01:04:21.713642   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.713653   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:21.713661   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:21.713721   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:21.749512   74039 cri.go:89] found id: ""
	I0914 01:04:21.749540   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.749549   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:21.749555   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:21.749615   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:21.786425   74039 cri.go:89] found id: ""
	I0914 01:04:21.786450   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.786459   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:21.786465   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:21.786516   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:21.819996   74039 cri.go:89] found id: ""
	I0914 01:04:21.820025   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.820038   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:21.820047   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:21.820106   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:21.851761   74039 cri.go:89] found id: ""
	I0914 01:04:21.851814   74039 logs.go:276] 0 containers: []
	W0914 01:04:21.851827   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:21.851838   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:21.851851   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:21.905735   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:21.905771   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:21.919562   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:21.919591   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:21.995686   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:21.995714   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:21.995729   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:22.078494   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:22.078531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:22.066473   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.566106   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.567477   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.693308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.695491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:27.195505   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:22.393312   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.394123   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:26.893561   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:24.619273   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:24.632461   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:24.632525   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:24.669012   74039 cri.go:89] found id: ""
	I0914 01:04:24.669040   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.669049   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:24.669056   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:24.669133   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:24.704742   74039 cri.go:89] found id: ""
	I0914 01:04:24.704769   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.704777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:24.704782   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:24.704838   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:24.741361   74039 cri.go:89] found id: ""
	I0914 01:04:24.741391   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.741403   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:24.741411   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:24.741481   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:24.776499   74039 cri.go:89] found id: ""
	I0914 01:04:24.776528   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.776536   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:24.776542   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:24.776590   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:24.811910   74039 cri.go:89] found id: ""
	I0914 01:04:24.811941   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.811952   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:24.811966   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:24.812029   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:24.844869   74039 cri.go:89] found id: ""
	I0914 01:04:24.844897   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.844905   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:24.844911   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:24.844960   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:24.877756   74039 cri.go:89] found id: ""
	I0914 01:04:24.877787   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.877795   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:24.877800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:24.877860   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:24.916570   74039 cri.go:89] found id: ""
	I0914 01:04:24.916599   74039 logs.go:276] 0 containers: []
	W0914 01:04:24.916611   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:24.916620   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:24.916636   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:24.968296   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:24.968335   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:24.983098   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:24.983125   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:25.060697   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:25.060719   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:25.060731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:25.141206   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:25.141244   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:27.681781   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:27.694843   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:27.694924   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:27.732773   74039 cri.go:89] found id: ""
	I0914 01:04:27.732805   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.732817   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:27.732825   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:27.732884   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:27.770785   74039 cri.go:89] found id: ""
	I0914 01:04:27.770816   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.770827   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:27.770835   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:27.770895   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:27.806813   74039 cri.go:89] found id: ""
	I0914 01:04:27.806844   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.806852   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:27.806858   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:27.806909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:27.845167   74039 cri.go:89] found id: ""
	I0914 01:04:27.845196   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.845205   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:27.845210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:27.845261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:27.885949   74039 cri.go:89] found id: ""
	I0914 01:04:27.885978   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.885987   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:27.885993   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:27.886042   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:27.921831   74039 cri.go:89] found id: ""
	I0914 01:04:27.921860   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.921868   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:27.921874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:27.921933   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:27.954490   74039 cri.go:89] found id: ""
	I0914 01:04:27.954523   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.954533   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:27.954541   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:27.954596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:27.991224   74039 cri.go:89] found id: ""
	I0914 01:04:27.991265   74039 logs.go:276] 0 containers: []
	W0914 01:04:27.991276   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:27.991323   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:27.991342   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:28.065679   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:28.065715   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:28.109231   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:28.109268   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:28.162579   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:28.162621   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:28.176584   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:28.176616   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:28.252368   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:29.065571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.065941   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:29.694186   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:32.194418   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:28.894614   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:31.393725   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:30.753542   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:30.766584   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:30.766653   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:30.800207   74039 cri.go:89] found id: ""
	I0914 01:04:30.800235   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.800246   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:30.800253   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:30.800322   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:30.838475   74039 cri.go:89] found id: ""
	I0914 01:04:30.838503   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.838513   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:30.838520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:30.838595   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:30.872524   74039 cri.go:89] found id: ""
	I0914 01:04:30.872547   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.872556   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:30.872561   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:30.872617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:30.908389   74039 cri.go:89] found id: ""
	I0914 01:04:30.908413   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.908421   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:30.908426   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:30.908474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:30.940252   74039 cri.go:89] found id: ""
	I0914 01:04:30.940279   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.940290   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:30.940297   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:30.940362   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:30.973920   74039 cri.go:89] found id: ""
	I0914 01:04:30.973951   74039 logs.go:276] 0 containers: []
	W0914 01:04:30.973962   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:30.973968   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:30.974019   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:31.006817   74039 cri.go:89] found id: ""
	I0914 01:04:31.006842   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.006850   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:31.006856   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:31.006921   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:31.044900   74039 cri.go:89] found id: ""
	I0914 01:04:31.044925   74039 logs.go:276] 0 containers: []
	W0914 01:04:31.044934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:31.044941   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:31.044950   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:31.058367   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:31.058401   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:31.124567   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:31.124593   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:31.124610   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:31.208851   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:31.208887   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:31.247991   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:31.248026   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:33.799361   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:33.812630   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:33.812689   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:33.847752   74039 cri.go:89] found id: ""
	I0914 01:04:33.847779   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.847808   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:33.847817   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:33.847875   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:33.885592   74039 cri.go:89] found id: ""
	I0914 01:04:33.885617   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.885626   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:33.885632   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:33.885690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:33.565744   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.566454   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:34.195568   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:36.694536   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.394329   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:35.893895   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:33.922783   74039 cri.go:89] found id: ""
	I0914 01:04:33.922808   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.922816   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:33.922822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:33.922869   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:33.959004   74039 cri.go:89] found id: ""
	I0914 01:04:33.959033   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.959044   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:33.959050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:33.959107   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:33.994934   74039 cri.go:89] found id: ""
	I0914 01:04:33.994964   74039 logs.go:276] 0 containers: []
	W0914 01:04:33.994975   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:33.994982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:33.995049   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:34.031992   74039 cri.go:89] found id: ""
	I0914 01:04:34.032026   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.032037   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:34.032045   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:34.032097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:34.067529   74039 cri.go:89] found id: ""
	I0914 01:04:34.067563   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.067573   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:34.067581   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:34.067639   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:34.101592   74039 cri.go:89] found id: ""
	I0914 01:04:34.101624   74039 logs.go:276] 0 containers: []
	W0914 01:04:34.101635   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:34.101644   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:34.101654   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:34.153446   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:34.153493   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:34.167755   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:34.167802   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:34.234475   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:34.234511   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:34.234531   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:34.313618   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:34.313663   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:36.854250   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:36.867964   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:36.868040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:36.907041   74039 cri.go:89] found id: ""
	I0914 01:04:36.907070   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.907080   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:36.907086   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:36.907169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:36.940370   74039 cri.go:89] found id: ""
	I0914 01:04:36.940397   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.940406   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:36.940413   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:36.940474   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:36.974153   74039 cri.go:89] found id: ""
	I0914 01:04:36.974183   74039 logs.go:276] 0 containers: []
	W0914 01:04:36.974206   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:36.974225   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:36.974299   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:37.008437   74039 cri.go:89] found id: ""
	I0914 01:04:37.008466   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.008474   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:37.008481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:37.008530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:37.041799   74039 cri.go:89] found id: ""
	I0914 01:04:37.041825   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.041833   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:37.041838   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:37.041885   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:37.076544   74039 cri.go:89] found id: ""
	I0914 01:04:37.076578   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.076586   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:37.076592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:37.076642   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:37.109025   74039 cri.go:89] found id: ""
	I0914 01:04:37.109055   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.109063   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:37.109070   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:37.109140   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:37.141992   74039 cri.go:89] found id: ""
	I0914 01:04:37.142022   74039 logs.go:276] 0 containers: []
	W0914 01:04:37.142030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:37.142039   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:37.142050   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:37.181645   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:37.181679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:37.234623   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:37.234658   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:37.247723   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:37.247751   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:37.313399   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:37.313434   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:37.313449   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:37.567339   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.067474   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:41.694491   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:37.894347   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:40.393892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:39.897480   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:39.911230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:39.911316   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:39.951887   74039 cri.go:89] found id: ""
	I0914 01:04:39.951914   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.951923   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:39.951929   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:39.951976   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:39.987483   74039 cri.go:89] found id: ""
	I0914 01:04:39.987510   74039 logs.go:276] 0 containers: []
	W0914 01:04:39.987526   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:39.987534   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:39.987592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:40.021527   74039 cri.go:89] found id: ""
	I0914 01:04:40.021561   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.021575   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:40.021589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:40.021658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:40.055614   74039 cri.go:89] found id: ""
	I0914 01:04:40.055643   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.055655   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:40.055664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:40.055729   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:40.089902   74039 cri.go:89] found id: ""
	I0914 01:04:40.089928   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.089936   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:40.089942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:40.090003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:40.121145   74039 cri.go:89] found id: ""
	I0914 01:04:40.121175   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.121186   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:40.121194   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:40.121263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:40.156895   74039 cri.go:89] found id: ""
	I0914 01:04:40.156921   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.156928   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:40.156934   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:40.156984   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:40.190086   74039 cri.go:89] found id: ""
	I0914 01:04:40.190118   74039 logs.go:276] 0 containers: []
	W0914 01:04:40.190128   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:40.190139   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:40.190153   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:40.240836   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:40.240872   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:40.254217   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:40.254242   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:40.332218   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:40.332248   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:40.332265   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:40.406253   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:40.406292   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:42.949313   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:42.962931   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:42.962993   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:42.997085   74039 cri.go:89] found id: ""
	I0914 01:04:42.997114   74039 logs.go:276] 0 containers: []
	W0914 01:04:42.997123   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:42.997128   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:42.997179   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:43.034957   74039 cri.go:89] found id: ""
	I0914 01:04:43.034986   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.034994   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:43.035000   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:43.035048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:43.069582   74039 cri.go:89] found id: ""
	I0914 01:04:43.069610   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.069618   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:43.069624   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:43.069677   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:43.102385   74039 cri.go:89] found id: ""
	I0914 01:04:43.102415   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.102426   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:43.102433   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:43.102497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:43.134922   74039 cri.go:89] found id: ""
	I0914 01:04:43.134954   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.134965   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:43.134980   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:43.135041   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:43.169452   74039 cri.go:89] found id: ""
	I0914 01:04:43.169475   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.169483   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:43.169489   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:43.169533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:43.203629   74039 cri.go:89] found id: ""
	I0914 01:04:43.203652   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.203659   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:43.203665   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:43.203718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:43.239332   74039 cri.go:89] found id: ""
	I0914 01:04:43.239356   74039 logs.go:276] 0 containers: []
	W0914 01:04:43.239365   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:43.239373   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:43.239383   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:43.318427   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:43.318466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:43.356416   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:43.356445   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:43.411540   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:43.411581   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:43.425160   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:43.425187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:43.493950   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:42.565617   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.567347   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:43.694807   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:46.195054   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:42.893306   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:44.894899   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:45.995034   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:46.008130   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:46.008194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:46.041180   74039 cri.go:89] found id: ""
	I0914 01:04:46.041204   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.041212   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:46.041218   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:46.041267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:46.082737   74039 cri.go:89] found id: ""
	I0914 01:04:46.082766   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.082782   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:46.082788   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:46.082847   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:46.115669   74039 cri.go:89] found id: ""
	I0914 01:04:46.115697   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.115705   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:46.115710   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:46.115774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:46.149029   74039 cri.go:89] found id: ""
	I0914 01:04:46.149067   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.149077   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:46.149103   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:46.149174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:46.182765   74039 cri.go:89] found id: ""
	I0914 01:04:46.182797   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.182805   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:46.182812   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:46.182868   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:46.216119   74039 cri.go:89] found id: ""
	I0914 01:04:46.216152   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.216165   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:46.216172   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:46.216226   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:46.248652   74039 cri.go:89] found id: ""
	I0914 01:04:46.248681   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.248691   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:46.248699   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:46.248759   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:46.281475   74039 cri.go:89] found id: ""
	I0914 01:04:46.281509   74039 logs.go:276] 0 containers: []
	W0914 01:04:46.281519   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:46.281529   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:46.281542   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:46.334678   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:46.334716   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:46.347416   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:46.347441   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:46.420748   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:46.420778   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:46.420797   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:46.500538   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:46.500576   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:47.067532   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.566429   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:48.693747   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:50.694193   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:47.393928   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.394436   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:51.894317   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:49.042910   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:49.055575   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:49.055658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:49.089380   74039 cri.go:89] found id: ""
	I0914 01:04:49.089407   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.089415   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:49.089421   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:49.089475   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:49.122855   74039 cri.go:89] found id: ""
	I0914 01:04:49.122890   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.122901   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:49.122909   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:49.122966   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:49.156383   74039 cri.go:89] found id: ""
	I0914 01:04:49.156410   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.156422   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:49.156429   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:49.156493   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:49.189875   74039 cri.go:89] found id: ""
	I0914 01:04:49.189904   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.189914   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:49.189923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:49.189981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:49.223557   74039 cri.go:89] found id: ""
	I0914 01:04:49.223590   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.223599   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:49.223605   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:49.223658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:49.262317   74039 cri.go:89] found id: ""
	I0914 01:04:49.262343   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.262351   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:49.262357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:49.262424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:49.295936   74039 cri.go:89] found id: ""
	I0914 01:04:49.295959   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.295968   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:49.295973   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:49.296024   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:49.338138   74039 cri.go:89] found id: ""
	I0914 01:04:49.338162   74039 logs.go:276] 0 containers: []
	W0914 01:04:49.338187   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:49.338197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:49.338209   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:49.351519   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:49.351554   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:49.418486   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:49.418513   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:49.418530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:49.495877   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:49.495916   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:49.534330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:49.534376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.088982   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:52.103357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:52.103420   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:52.136360   74039 cri.go:89] found id: ""
	I0914 01:04:52.136393   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.136406   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:52.136413   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:52.136485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:52.172068   74039 cri.go:89] found id: ""
	I0914 01:04:52.172095   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.172105   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:52.172113   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:52.172169   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:52.207502   74039 cri.go:89] found id: ""
	I0914 01:04:52.207525   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.207538   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:52.207544   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:52.207605   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:52.241505   74039 cri.go:89] found id: ""
	I0914 01:04:52.241544   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.241554   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:52.241563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:52.241627   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:52.274017   74039 cri.go:89] found id: ""
	I0914 01:04:52.274047   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.274059   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:52.274067   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:52.274125   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:52.307959   74039 cri.go:89] found id: ""
	I0914 01:04:52.307987   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.307999   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:52.308006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:52.308130   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:52.341856   74039 cri.go:89] found id: ""
	I0914 01:04:52.341878   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.341886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:52.341894   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:52.341943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:52.374903   74039 cri.go:89] found id: ""
	I0914 01:04:52.374926   74039 logs.go:276] 0 containers: []
	W0914 01:04:52.374934   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:52.374942   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:52.374954   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:52.427616   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:52.427656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:52.455508   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:52.455543   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:52.533958   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:52.533979   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:52.533992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:52.615588   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:52.615632   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:52.065964   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.565120   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.566307   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:53.193540   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.193894   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:57.195563   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:54.393095   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:56.393476   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:55.156043   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:55.169486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:55.169580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:55.205969   74039 cri.go:89] found id: ""
	I0914 01:04:55.206003   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.206015   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:55.206021   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:55.206083   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:55.239469   74039 cri.go:89] found id: ""
	I0914 01:04:55.239503   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.239512   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:55.239520   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:55.239573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:55.272250   74039 cri.go:89] found id: ""
	I0914 01:04:55.272297   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.272308   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:55.272318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:55.272379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:55.308759   74039 cri.go:89] found id: ""
	I0914 01:04:55.308794   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.308814   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:55.308825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:55.308892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:55.343248   74039 cri.go:89] found id: ""
	I0914 01:04:55.343275   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.343286   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:55.343293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:55.343358   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:55.378199   74039 cri.go:89] found id: ""
	I0914 01:04:55.378228   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.378237   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:55.378244   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:55.378309   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:55.412386   74039 cri.go:89] found id: ""
	I0914 01:04:55.412414   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.412424   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:55.412431   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:55.412497   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:55.447222   74039 cri.go:89] found id: ""
	I0914 01:04:55.447250   74039 logs.go:276] 0 containers: []
	W0914 01:04:55.447260   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:55.447270   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:55.447282   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:55.516038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:55.516069   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:55.516082   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:55.603711   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:55.603759   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:55.645508   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:55.645545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:55.696982   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:55.697018   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.212133   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:04:58.225032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:04:58.225098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:04:58.263672   74039 cri.go:89] found id: ""
	I0914 01:04:58.263700   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.263707   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:04:58.263713   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:04:58.263758   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:04:58.300600   74039 cri.go:89] found id: ""
	I0914 01:04:58.300633   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.300644   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:04:58.300651   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:04:58.300715   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:04:58.335755   74039 cri.go:89] found id: ""
	I0914 01:04:58.335804   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.335815   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:04:58.335823   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:04:58.335890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:04:58.369462   74039 cri.go:89] found id: ""
	I0914 01:04:58.369505   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.369517   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:04:58.369525   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:04:58.369586   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:04:58.406092   74039 cri.go:89] found id: ""
	I0914 01:04:58.406118   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.406128   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:04:58.406136   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:04:58.406198   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:04:58.441055   74039 cri.go:89] found id: ""
	I0914 01:04:58.441080   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.441088   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:04:58.441094   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:04:58.441147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:04:58.475679   74039 cri.go:89] found id: ""
	I0914 01:04:58.475717   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.475729   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:04:58.475748   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:04:58.475841   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:04:58.514387   74039 cri.go:89] found id: ""
	I0914 01:04:58.514418   74039 logs.go:276] 0 containers: []
	W0914 01:04:58.514428   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:04:58.514439   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:04:58.514453   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:04:58.568807   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:04:58.568835   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:04:58.583042   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:04:58.583068   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:04:58.648448   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:04:58.648476   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:04:58.648492   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:04:58.731772   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:04:58.731832   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:04:58.567653   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.066104   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:59.694013   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.695213   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:04:58.394279   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:00.893367   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:01.270266   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:01.283866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:01.283943   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:01.319934   74039 cri.go:89] found id: ""
	I0914 01:05:01.319966   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.319978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:01.319986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:01.320048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:01.354253   74039 cri.go:89] found id: ""
	I0914 01:05:01.354283   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.354294   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:01.354307   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:01.354372   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:01.387801   74039 cri.go:89] found id: ""
	I0914 01:05:01.387831   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.387842   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:01.387849   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:01.387908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:01.420556   74039 cri.go:89] found id: ""
	I0914 01:05:01.420577   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.420586   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:01.420591   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:01.420635   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:01.451041   74039 cri.go:89] found id: ""
	I0914 01:05:01.451069   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.451079   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:01.451086   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:01.451146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:01.488914   74039 cri.go:89] found id: ""
	I0914 01:05:01.488943   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.488954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:01.488961   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:01.489021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:01.523552   74039 cri.go:89] found id: ""
	I0914 01:05:01.523579   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.523586   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:01.523592   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:01.523654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:01.556415   74039 cri.go:89] found id: ""
	I0914 01:05:01.556442   74039 logs.go:276] 0 containers: []
	W0914 01:05:01.556463   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:01.556473   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:01.556486   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:01.632370   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:01.632404   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:01.670443   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:01.670472   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:01.721190   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:01.721226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:01.734743   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:01.734770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:01.802005   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:03.565725   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.065457   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.193655   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:06.194690   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:03.394185   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:05.893897   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:04.302289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:04.314866   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:04.314942   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:04.347792   74039 cri.go:89] found id: ""
	I0914 01:05:04.347819   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.347830   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:04.347837   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:04.347893   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:04.387691   74039 cri.go:89] found id: ""
	I0914 01:05:04.387717   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.387726   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:04.387731   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:04.387777   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:04.423515   74039 cri.go:89] found id: ""
	I0914 01:05:04.423546   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.423558   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:04.423567   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:04.423626   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:04.458054   74039 cri.go:89] found id: ""
	I0914 01:05:04.458080   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.458088   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:04.458095   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:04.458152   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:04.494318   74039 cri.go:89] found id: ""
	I0914 01:05:04.494346   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.494354   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:04.494359   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:04.494408   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:04.527464   74039 cri.go:89] found id: ""
	I0914 01:05:04.527487   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.527495   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:04.527502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:04.527548   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:04.562913   74039 cri.go:89] found id: ""
	I0914 01:05:04.562940   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.562949   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:04.562954   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:04.563010   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:04.604853   74039 cri.go:89] found id: ""
	I0914 01:05:04.604876   74039 logs.go:276] 0 containers: []
	W0914 01:05:04.604885   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:04.604895   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:04.604910   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:04.679649   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:04.679691   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:04.699237   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:04.699288   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:04.762809   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:04.762837   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:04.762857   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:04.840299   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:04.840341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.387038   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:07.402107   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:07.402177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:07.434551   74039 cri.go:89] found id: ""
	I0914 01:05:07.434577   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.434585   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:07.434591   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:07.434658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:07.468745   74039 cri.go:89] found id: ""
	I0914 01:05:07.468769   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.468777   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:07.468783   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:07.468833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:07.502871   74039 cri.go:89] found id: ""
	I0914 01:05:07.502898   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.502909   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:07.502917   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:07.502982   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:07.539823   74039 cri.go:89] found id: ""
	I0914 01:05:07.539848   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.539856   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:07.539862   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:07.539911   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:07.574885   74039 cri.go:89] found id: ""
	I0914 01:05:07.574911   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.574919   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:07.574926   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:07.574981   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:07.609493   74039 cri.go:89] found id: ""
	I0914 01:05:07.609523   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.609540   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:07.609549   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:07.609597   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:07.644511   74039 cri.go:89] found id: ""
	I0914 01:05:07.644547   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.644557   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:07.644568   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:07.644618   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:07.679231   74039 cri.go:89] found id: ""
	I0914 01:05:07.679256   74039 logs.go:276] 0 containers: []
	W0914 01:05:07.679266   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:07.679277   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:07.679291   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:07.752430   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:07.752451   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:07.752466   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:07.830011   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:07.830046   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:07.868053   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:07.868089   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:07.920606   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:07.920638   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:08.065720   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.066411   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:08.693101   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.695217   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:07.898263   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.393593   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:10.435092   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:10.448023   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:10.448082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:10.488092   74039 cri.go:89] found id: ""
	I0914 01:05:10.488121   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.488129   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:10.488134   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:10.488197   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:10.524477   74039 cri.go:89] found id: ""
	I0914 01:05:10.524506   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.524516   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:10.524522   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:10.524574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:10.561271   74039 cri.go:89] found id: ""
	I0914 01:05:10.561304   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.561311   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:10.561317   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:10.561376   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:10.597237   74039 cri.go:89] found id: ""
	I0914 01:05:10.597263   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.597279   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:10.597287   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:10.597345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:10.633896   74039 cri.go:89] found id: ""
	I0914 01:05:10.633923   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.633934   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:10.633942   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:10.633996   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:10.673169   74039 cri.go:89] found id: ""
	I0914 01:05:10.673200   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.673208   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:10.673214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:10.673260   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:10.710425   74039 cri.go:89] found id: ""
	I0914 01:05:10.710464   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.710474   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:10.710481   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:10.710549   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:10.743732   74039 cri.go:89] found id: ""
	I0914 01:05:10.743754   74039 logs.go:276] 0 containers: []
	W0914 01:05:10.743762   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:10.743769   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:10.743780   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:10.810190   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:10.810211   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:10.810226   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:10.892637   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:10.892682   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:10.934536   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:10.934564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:10.988526   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:10.988563   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.502853   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:13.516583   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:13.516660   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:13.550387   74039 cri.go:89] found id: ""
	I0914 01:05:13.550423   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.550435   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:13.550442   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:13.550508   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:13.591653   74039 cri.go:89] found id: ""
	I0914 01:05:13.591676   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.591684   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:13.591689   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:13.591734   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:13.624253   74039 cri.go:89] found id: ""
	I0914 01:05:13.624279   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.624287   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:13.624293   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:13.624347   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:13.662624   74039 cri.go:89] found id: ""
	I0914 01:05:13.662658   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.662670   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:13.662677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:13.662741   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:13.698635   74039 cri.go:89] found id: ""
	I0914 01:05:13.698664   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.698671   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:13.698677   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:13.698740   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:13.733287   74039 cri.go:89] found id: ""
	I0914 01:05:13.733336   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.733346   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:13.733353   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:13.733424   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:13.771570   74039 cri.go:89] found id: ""
	I0914 01:05:13.771593   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.771607   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:13.771615   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:13.771670   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:13.805230   74039 cri.go:89] found id: ""
	I0914 01:05:13.805262   74039 logs.go:276] 0 containers: []
	W0914 01:05:13.805288   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:13.805299   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:13.805312   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:13.883652   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:13.883688   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:12.565180   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.565379   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.566488   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.193501   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:15.695601   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:12.393686   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:14.893697   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:16.894778   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:13.923857   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:13.923893   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:13.974155   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:13.974192   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:13.987466   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:13.987495   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:14.054113   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.554257   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:16.567069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:16.567147   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:16.601881   74039 cri.go:89] found id: ""
	I0914 01:05:16.601906   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.601914   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:16.601921   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:16.601971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:16.637700   74039 cri.go:89] found id: ""
	I0914 01:05:16.637725   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.637735   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:16.637742   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:16.637833   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:16.671851   74039 cri.go:89] found id: ""
	I0914 01:05:16.671879   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.671888   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:16.671896   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:16.671957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:16.706186   74039 cri.go:89] found id: ""
	I0914 01:05:16.706211   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.706219   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:16.706224   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:16.706272   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:16.748441   74039 cri.go:89] found id: ""
	I0914 01:05:16.748468   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.748478   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:16.748486   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:16.748546   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:16.781586   74039 cri.go:89] found id: ""
	I0914 01:05:16.781617   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.781626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:16.781632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:16.781692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:16.818612   74039 cri.go:89] found id: ""
	I0914 01:05:16.818635   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.818643   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:16.818649   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:16.818708   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:16.855773   74039 cri.go:89] found id: ""
	I0914 01:05:16.855825   74039 logs.go:276] 0 containers: []
	W0914 01:05:16.855839   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:16.855850   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:16.855870   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:16.869354   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:16.869385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:16.945938   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:16.945960   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:16.945976   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:17.025568   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:17.025609   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:17.064757   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:17.064788   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:18.567635   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.066213   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:18.193783   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:20.193947   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.393002   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:21.393386   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:19.621156   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:19.634825   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:19.634890   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:19.675448   74039 cri.go:89] found id: ""
	I0914 01:05:19.675476   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.675484   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:19.675490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:19.675551   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:19.709459   74039 cri.go:89] found id: ""
	I0914 01:05:19.709491   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.709500   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:19.709505   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:19.709562   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:19.740982   74039 cri.go:89] found id: ""
	I0914 01:05:19.741007   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.741014   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:19.741020   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:19.741063   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:19.773741   74039 cri.go:89] found id: ""
	I0914 01:05:19.773769   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.773777   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:19.773783   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:19.773834   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:19.807688   74039 cri.go:89] found id: ""
	I0914 01:05:19.807721   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.807732   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:19.807740   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:19.807820   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:19.840294   74039 cri.go:89] found id: ""
	I0914 01:05:19.840319   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.840330   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:19.840339   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:19.840403   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:19.875193   74039 cri.go:89] found id: ""
	I0914 01:05:19.875233   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.875245   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:19.875255   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:19.875317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:19.909931   74039 cri.go:89] found id: ""
	I0914 01:05:19.909964   74039 logs.go:276] 0 containers: []
	W0914 01:05:19.909974   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:19.909985   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:19.909998   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:19.992896   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:19.992942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:20.030238   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:20.030266   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:20.084506   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:20.084546   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:20.098712   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:20.098756   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:20.170038   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:22.670292   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:22.682832   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:22.682908   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:22.717591   74039 cri.go:89] found id: ""
	I0914 01:05:22.717616   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.717626   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:22.717634   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:22.717693   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:22.750443   74039 cri.go:89] found id: ""
	I0914 01:05:22.750472   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.750484   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:22.750490   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:22.750560   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:22.785674   74039 cri.go:89] found id: ""
	I0914 01:05:22.785703   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.785715   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:22.785722   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:22.785785   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:22.819578   74039 cri.go:89] found id: ""
	I0914 01:05:22.819604   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.819612   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:22.819618   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:22.819665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:22.851315   74039 cri.go:89] found id: ""
	I0914 01:05:22.851370   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.851382   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:22.851389   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:22.851452   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:22.886576   74039 cri.go:89] found id: ""
	I0914 01:05:22.886605   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.886617   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:22.886625   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:22.886686   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:22.921356   74039 cri.go:89] found id: ""
	I0914 01:05:22.921386   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.921396   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:22.921404   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:22.921456   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:22.953931   74039 cri.go:89] found id: ""
	I0914 01:05:22.953963   74039 logs.go:276] 0 containers: []
	W0914 01:05:22.953975   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:22.953986   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:22.954002   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:23.009046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:23.009083   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:23.022420   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:23.022454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:23.094225   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:23.094264   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:23.094280   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:23.172161   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:23.172198   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:23.066564   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.566361   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:22.693893   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:24.694211   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:26.694440   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:23.892517   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.893449   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:25.712904   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:25.725632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:25.725711   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:25.758289   74039 cri.go:89] found id: ""
	I0914 01:05:25.758334   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.758345   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:25.758352   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:25.758414   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:25.791543   74039 cri.go:89] found id: ""
	I0914 01:05:25.791568   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.791577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:25.791582   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:25.791628   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:25.824867   74039 cri.go:89] found id: ""
	I0914 01:05:25.824894   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.824902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:25.824909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:25.824967   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:25.859195   74039 cri.go:89] found id: ""
	I0914 01:05:25.859229   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.859242   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:25.859250   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:25.859319   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:25.893419   74039 cri.go:89] found id: ""
	I0914 01:05:25.893447   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.893457   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:25.893464   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:25.893530   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:25.929617   74039 cri.go:89] found id: ""
	I0914 01:05:25.929641   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.929651   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:25.929658   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:25.929718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:25.963092   74039 cri.go:89] found id: ""
	I0914 01:05:25.963118   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.963126   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:25.963132   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:25.963187   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:25.997516   74039 cri.go:89] found id: ""
	I0914 01:05:25.997539   74039 logs.go:276] 0 containers: []
	W0914 01:05:25.997547   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:25.997560   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:25.997571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:26.010538   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:26.010571   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:26.079534   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:26.079565   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:26.079577   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:26.162050   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:26.162090   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:26.202102   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:26.202137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:28.755662   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:28.769050   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:28.769138   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:28.806008   74039 cri.go:89] found id: ""
	I0914 01:05:28.806030   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.806038   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:28.806043   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:28.806092   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:28.843006   74039 cri.go:89] found id: ""
	I0914 01:05:28.843034   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.843042   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:28.843048   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:28.843097   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:28.886912   74039 cri.go:89] found id: ""
	I0914 01:05:28.886938   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.886946   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:28.886951   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:28.887008   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:28.066461   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.565957   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.695271   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:31.193957   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:27.893855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:30.392689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:28.922483   74039 cri.go:89] found id: ""
	I0914 01:05:28.922510   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.922527   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:28.922535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:28.922600   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:28.957282   74039 cri.go:89] found id: ""
	I0914 01:05:28.957305   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.957313   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:28.957318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:28.957367   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:28.991991   74039 cri.go:89] found id: ""
	I0914 01:05:28.992017   74039 logs.go:276] 0 containers: []
	W0914 01:05:28.992026   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:28.992032   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:28.992098   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:29.030281   74039 cri.go:89] found id: ""
	I0914 01:05:29.030394   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.030412   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:29.030420   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:29.030486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:29.063802   74039 cri.go:89] found id: ""
	I0914 01:05:29.063833   74039 logs.go:276] 0 containers: []
	W0914 01:05:29.063844   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:29.063854   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:29.063868   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:29.135507   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:29.135532   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:29.135544   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:29.215271   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:29.215305   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:29.254232   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:29.254263   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:29.312159   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:29.312194   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:31.828047   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:31.840143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:31.840204   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:31.875572   74039 cri.go:89] found id: ""
	I0914 01:05:31.875597   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.875605   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:31.875611   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:31.875654   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:31.909486   74039 cri.go:89] found id: ""
	I0914 01:05:31.909521   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.909532   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:31.909540   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:31.909603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:31.942871   74039 cri.go:89] found id: ""
	I0914 01:05:31.942905   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.942915   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:31.942923   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:31.942988   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:31.979375   74039 cri.go:89] found id: ""
	I0914 01:05:31.979405   74039 logs.go:276] 0 containers: []
	W0914 01:05:31.979416   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:31.979423   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:31.979483   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:32.014826   74039 cri.go:89] found id: ""
	I0914 01:05:32.014852   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.014863   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:32.014870   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:32.014928   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:32.049241   74039 cri.go:89] found id: ""
	I0914 01:05:32.049276   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.049288   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:32.049295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:32.049353   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:32.084606   74039 cri.go:89] found id: ""
	I0914 01:05:32.084636   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.084647   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:32.084655   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:32.084718   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:32.117195   74039 cri.go:89] found id: ""
	I0914 01:05:32.117218   74039 logs.go:276] 0 containers: []
	W0914 01:05:32.117226   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:32.117234   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:32.117247   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:32.172294   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:32.172340   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:32.185484   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:32.185514   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:32.257205   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:32.257226   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:32.257241   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:32.335350   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:32.335389   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:33.066203   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.067091   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:33.194304   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:35.195621   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:32.393170   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.394047   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:36.894046   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:34.878278   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:34.893127   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:34.893217   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:34.927898   74039 cri.go:89] found id: ""
	I0914 01:05:34.927926   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.927934   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:34.927944   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:34.928001   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:34.962577   74039 cri.go:89] found id: ""
	I0914 01:05:34.962605   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.962616   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:34.962624   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:34.962682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:34.998972   74039 cri.go:89] found id: ""
	I0914 01:05:34.999001   74039 logs.go:276] 0 containers: []
	W0914 01:05:34.999012   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:34.999019   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:34.999082   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:35.034191   74039 cri.go:89] found id: ""
	I0914 01:05:35.034220   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.034231   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:35.034238   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:35.034304   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:35.072023   74039 cri.go:89] found id: ""
	I0914 01:05:35.072069   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.072080   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:35.072091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:35.072157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:35.111646   74039 cri.go:89] found id: ""
	I0914 01:05:35.111675   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.111686   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:35.111694   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:35.111761   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:35.148171   74039 cri.go:89] found id: ""
	I0914 01:05:35.148200   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.148210   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:35.148217   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:35.148302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:35.181997   74039 cri.go:89] found id: ""
	I0914 01:05:35.182028   74039 logs.go:276] 0 containers: []
	W0914 01:05:35.182040   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:35.182051   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:35.182064   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:35.235858   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:35.235897   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:35.249836   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:35.249869   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:35.321011   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:35.321040   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:35.321055   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:35.402937   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:35.402981   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:37.946729   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:37.961270   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:37.961345   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:37.994559   74039 cri.go:89] found id: ""
	I0914 01:05:37.994584   74039 logs.go:276] 0 containers: []
	W0914 01:05:37.994598   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:37.994606   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:37.994667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:38.034331   74039 cri.go:89] found id: ""
	I0914 01:05:38.034355   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.034362   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:38.034368   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:38.034427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:38.069592   74039 cri.go:89] found id: ""
	I0914 01:05:38.069620   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.069628   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:38.069634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:38.069690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:38.104160   74039 cri.go:89] found id: ""
	I0914 01:05:38.104189   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.104202   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:38.104210   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:38.104268   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:38.138225   74039 cri.go:89] found id: ""
	I0914 01:05:38.138252   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.138260   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:38.138265   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:38.138317   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:38.173624   74039 cri.go:89] found id: ""
	I0914 01:05:38.173653   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.173661   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:38.173667   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:38.173728   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:38.210462   74039 cri.go:89] found id: ""
	I0914 01:05:38.210489   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.210497   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:38.210502   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:38.210561   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:38.242807   74039 cri.go:89] found id: ""
	I0914 01:05:38.242840   74039 logs.go:276] 0 containers: []
	W0914 01:05:38.242851   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:38.242866   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:38.242880   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:38.306810   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:38.306832   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:38.306847   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:38.388776   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:38.388817   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:38.441211   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:38.441251   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:38.501169   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:38.501211   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:37.565571   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.065868   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:37.693813   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:40.194073   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:42.194463   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:39.393110   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.893075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:41.017439   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:41.029890   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:41.029959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:41.065973   74039 cri.go:89] found id: ""
	I0914 01:05:41.065998   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.066006   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:41.066011   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:41.066068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:41.106921   74039 cri.go:89] found id: ""
	I0914 01:05:41.106950   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.106958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:41.106964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:41.107022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:41.142233   74039 cri.go:89] found id: ""
	I0914 01:05:41.142270   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.142284   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:41.142291   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:41.142355   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:41.175659   74039 cri.go:89] found id: ""
	I0914 01:05:41.175686   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.175698   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:41.175705   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:41.175774   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:41.210061   74039 cri.go:89] found id: ""
	I0914 01:05:41.210099   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.210107   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:41.210113   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:41.210161   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:41.241964   74039 cri.go:89] found id: ""
	I0914 01:05:41.241992   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.242000   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:41.242005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:41.242052   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:41.276014   74039 cri.go:89] found id: ""
	I0914 01:05:41.276040   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.276048   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:41.276055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:41.276116   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:41.311944   74039 cri.go:89] found id: ""
	I0914 01:05:41.311973   74039 logs.go:276] 0 containers: []
	W0914 01:05:41.311984   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:41.311995   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:41.312009   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:41.365415   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:41.365454   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:41.379718   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:41.379750   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:41.448265   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:41.448287   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:41.448298   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:41.526090   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:41.526128   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:42.565963   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.566441   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.693112   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.694016   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:43.893374   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:46.393919   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:44.064498   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:44.077296   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:44.077375   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:44.112930   74039 cri.go:89] found id: ""
	I0914 01:05:44.112965   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.112978   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:44.112986   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:44.113037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:44.151873   74039 cri.go:89] found id: ""
	I0914 01:05:44.151900   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.151910   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:44.151916   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:44.151970   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:44.203849   74039 cri.go:89] found id: ""
	I0914 01:05:44.203878   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.203889   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:44.203897   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:44.203955   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:44.251556   74039 cri.go:89] found id: ""
	I0914 01:05:44.251585   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.251596   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:44.251604   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:44.251667   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:44.293496   74039 cri.go:89] found id: ""
	I0914 01:05:44.293522   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.293530   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:44.293536   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:44.293603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:44.334286   74039 cri.go:89] found id: ""
	I0914 01:05:44.334328   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.334342   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:44.334350   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:44.334413   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:44.370759   74039 cri.go:89] found id: ""
	I0914 01:05:44.370785   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.370796   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:44.370804   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:44.370865   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:44.407629   74039 cri.go:89] found id: ""
	I0914 01:05:44.407654   74039 logs.go:276] 0 containers: []
	W0914 01:05:44.407661   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:44.407668   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:44.407679   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:44.461244   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:44.461289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:44.474951   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:44.474980   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:44.541752   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:44.541771   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:44.541781   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:44.618163   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:44.618201   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.157770   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:47.169889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:47.169975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:47.203273   74039 cri.go:89] found id: ""
	I0914 01:05:47.203302   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.203311   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:47.203317   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:47.203361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:47.236577   74039 cri.go:89] found id: ""
	I0914 01:05:47.236603   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.236611   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:47.236617   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:47.236669   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:47.269762   74039 cri.go:89] found id: ""
	I0914 01:05:47.269789   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.269797   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:47.269803   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:47.269859   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:47.305330   74039 cri.go:89] found id: ""
	I0914 01:05:47.305359   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.305370   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:47.305377   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:47.305428   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:47.342167   74039 cri.go:89] found id: ""
	I0914 01:05:47.342212   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.342221   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:47.342227   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:47.342285   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:47.375313   74039 cri.go:89] found id: ""
	I0914 01:05:47.375344   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.375355   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:47.375362   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:47.375427   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:47.417822   74039 cri.go:89] found id: ""
	I0914 01:05:47.417855   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.417867   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:47.417874   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:47.417932   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:47.450591   74039 cri.go:89] found id: ""
	I0914 01:05:47.450620   74039 logs.go:276] 0 containers: []
	W0914 01:05:47.450631   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:47.450642   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:47.450656   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:47.464002   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:47.464030   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:47.537620   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:47.537647   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:47.537661   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:47.613691   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:47.613730   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:47.654798   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:47.654830   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:47.066399   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:49.565328   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.694845   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.695428   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:48.394121   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.394774   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:50.208153   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:50.220982   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:50.221048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:50.253642   74039 cri.go:89] found id: ""
	I0914 01:05:50.253670   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.253679   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:50.253687   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:50.253745   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:50.285711   74039 cri.go:89] found id: ""
	I0914 01:05:50.285738   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.285750   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:50.285757   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:50.285817   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:50.327750   74039 cri.go:89] found id: ""
	I0914 01:05:50.327796   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.327809   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:50.327817   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:50.327878   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:50.367338   74039 cri.go:89] found id: ""
	I0914 01:05:50.367366   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.367377   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:50.367384   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:50.367445   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:50.403738   74039 cri.go:89] found id: ""
	I0914 01:05:50.403760   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.403767   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:50.403780   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:50.403853   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:50.437591   74039 cri.go:89] found id: ""
	I0914 01:05:50.437620   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.437627   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:50.437634   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:50.437682   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:50.471267   74039 cri.go:89] found id: ""
	I0914 01:05:50.471314   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.471322   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:50.471328   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:50.471378   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:50.505887   74039 cri.go:89] found id: ""
	I0914 01:05:50.505912   74039 logs.go:276] 0 containers: []
	W0914 01:05:50.505920   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:50.505928   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:50.505943   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:50.556261   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:50.556311   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:50.573153   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:50.573190   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:50.696820   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:50.696847   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:50.696866   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:50.771752   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:50.771799   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.315679   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:53.328418   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:53.328486   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:53.363613   74039 cri.go:89] found id: ""
	I0914 01:05:53.363644   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.363654   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:53.363662   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:53.363749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:53.402925   74039 cri.go:89] found id: ""
	I0914 01:05:53.402955   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.402969   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:53.402975   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:53.403022   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:53.436108   74039 cri.go:89] found id: ""
	I0914 01:05:53.436133   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.436142   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:53.436147   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:53.436194   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:53.473780   74039 cri.go:89] found id: ""
	I0914 01:05:53.473811   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.473822   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:53.473829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:53.473891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:53.507467   74039 cri.go:89] found id: ""
	I0914 01:05:53.507492   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.507500   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:53.507506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:53.507566   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:53.541098   74039 cri.go:89] found id: ""
	I0914 01:05:53.541132   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.541142   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:53.541157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:53.541218   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:53.575959   74039 cri.go:89] found id: ""
	I0914 01:05:53.575990   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.576001   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:53.576008   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:53.576068   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:53.612044   74039 cri.go:89] found id: ""
	I0914 01:05:53.612074   74039 logs.go:276] 0 containers: []
	W0914 01:05:53.612085   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:53.612096   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:53.612109   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:53.624883   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:53.624920   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:53.695721   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:53.695748   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:53.695765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:53.779488   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:53.779524   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:53.819712   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:53.819738   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:52.065121   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.065615   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.565701   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:53.193099   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:55.193773   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:57.194482   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:52.893027   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:54.893978   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.894592   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:56.373496   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:56.398295   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:56.398379   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:56.450505   74039 cri.go:89] found id: ""
	I0914 01:05:56.450534   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.450542   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:56.450549   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:56.450616   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:56.483829   74039 cri.go:89] found id: ""
	I0914 01:05:56.483859   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.483871   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:56.483878   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:56.483944   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:56.519946   74039 cri.go:89] found id: ""
	I0914 01:05:56.519975   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.519986   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:56.519994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:56.520056   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:56.556581   74039 cri.go:89] found id: ""
	I0914 01:05:56.556609   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.556617   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:56.556623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:56.556674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:56.591599   74039 cri.go:89] found id: ""
	I0914 01:05:56.591624   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.591633   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:56.591639   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:56.591696   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:56.623328   74039 cri.go:89] found id: ""
	I0914 01:05:56.623358   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.623369   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:56.623375   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:56.623423   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:56.659064   74039 cri.go:89] found id: ""
	I0914 01:05:56.659096   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.659104   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:56.659109   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:56.659167   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:56.693468   74039 cri.go:89] found id: ""
	I0914 01:05:56.693490   74039 logs.go:276] 0 containers: []
	W0914 01:05:56.693497   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:56.693508   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:56.693521   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:05:56.728586   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:56.728611   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:56.782279   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:56.782327   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:56.796681   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:56.796710   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:56.865061   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:56.865085   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:56.865101   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:58.565914   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.066609   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.695333   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:02.193586   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.393734   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:01.893193   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:05:59.450095   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:05:59.463071   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:05:59.463139   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:05:59.500307   74039 cri.go:89] found id: ""
	I0914 01:05:59.500339   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.500350   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:05:59.500359   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:05:59.500426   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:05:59.535149   74039 cri.go:89] found id: ""
	I0914 01:05:59.535178   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.535190   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:05:59.535197   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:05:59.535261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:05:59.568263   74039 cri.go:89] found id: ""
	I0914 01:05:59.568288   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.568298   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:05:59.568304   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:05:59.568361   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:05:59.607364   74039 cri.go:89] found id: ""
	I0914 01:05:59.607395   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.607405   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:05:59.607413   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:05:59.607477   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:05:59.641792   74039 cri.go:89] found id: ""
	I0914 01:05:59.641818   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.641826   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:05:59.641831   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:05:59.641892   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:05:59.676563   74039 cri.go:89] found id: ""
	I0914 01:05:59.676593   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.676603   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:05:59.676611   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:05:59.676674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:05:59.709843   74039 cri.go:89] found id: ""
	I0914 01:05:59.709868   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.709879   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:05:59.709887   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:05:59.709949   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:05:59.741410   74039 cri.go:89] found id: ""
	I0914 01:05:59.741438   74039 logs.go:276] 0 containers: []
	W0914 01:05:59.741446   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:05:59.741455   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:05:59.741464   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:05:59.793197   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:05:59.793236   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:05:59.807884   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:05:59.807921   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:05:59.875612   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:05:59.875641   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:05:59.875655   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:05:59.952641   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:05:59.952684   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:02.491707   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:02.504758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:02.504843   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:02.539560   74039 cri.go:89] found id: ""
	I0914 01:06:02.539602   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.539614   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:02.539625   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:02.539692   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:02.574967   74039 cri.go:89] found id: ""
	I0914 01:06:02.575008   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.575020   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:02.575031   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:02.575100   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:02.607073   74039 cri.go:89] found id: ""
	I0914 01:06:02.607106   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.607118   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:02.607125   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:02.607177   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:02.641367   74039 cri.go:89] found id: ""
	I0914 01:06:02.641393   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.641401   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:02.641408   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:02.641455   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:02.685782   74039 cri.go:89] found id: ""
	I0914 01:06:02.685811   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.685821   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:02.685829   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:02.685891   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:02.718460   74039 cri.go:89] found id: ""
	I0914 01:06:02.718491   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.718501   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:02.718509   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:02.718573   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:02.752718   74039 cri.go:89] found id: ""
	I0914 01:06:02.752746   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.752754   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:02.752762   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:02.752811   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:02.786096   74039 cri.go:89] found id: ""
	I0914 01:06:02.786126   74039 logs.go:276] 0 containers: []
	W0914 01:06:02.786139   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:02.786150   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:02.786165   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:02.842122   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:02.842160   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:02.856634   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:02.856665   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:02.932414   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:02.932440   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:02.932451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:03.010957   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:03.010991   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:03.566712   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.066671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:04.694370   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.695422   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:03.893789   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:06.392644   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:05.549487   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:05.563357   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:05.563418   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:05.597602   74039 cri.go:89] found id: ""
	I0914 01:06:05.597627   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.597635   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:05.597641   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:05.597699   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:05.633527   74039 cri.go:89] found id: ""
	I0914 01:06:05.633565   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.633577   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:05.633588   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:05.633650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:05.665513   74039 cri.go:89] found id: ""
	I0914 01:06:05.665536   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.665544   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:05.665550   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:05.665602   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:05.702811   74039 cri.go:89] found id: ""
	I0914 01:06:05.702841   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.702852   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:05.702860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:05.702922   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:05.735479   74039 cri.go:89] found id: ""
	I0914 01:06:05.735507   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.735516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:05.735521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:05.735575   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:05.770332   74039 cri.go:89] found id: ""
	I0914 01:06:05.770368   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.770379   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:05.770388   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:05.770454   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:05.807603   74039 cri.go:89] found id: ""
	I0914 01:06:05.807629   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.807637   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:05.807642   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:05.807690   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:05.849671   74039 cri.go:89] found id: ""
	I0914 01:06:05.849699   74039 logs.go:276] 0 containers: []
	W0914 01:06:05.849707   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:05.849716   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:05.849731   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:05.906230   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:05.906262   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:05.920032   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:05.920060   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:05.993884   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:05.993912   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:05.993930   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:06.073096   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:06.073127   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:08.610844   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:08.623239   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:08.623302   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:08.655368   74039 cri.go:89] found id: ""
	I0914 01:06:08.655394   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.655402   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:08.655409   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:08.655476   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:08.688969   74039 cri.go:89] found id: ""
	I0914 01:06:08.688994   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.689004   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:08.689012   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:08.689157   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:08.723419   74039 cri.go:89] found id: ""
	I0914 01:06:08.723441   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.723449   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:08.723455   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:08.723514   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:08.757470   74039 cri.go:89] found id: ""
	I0914 01:06:08.757493   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.757500   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:08.757506   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:08.757580   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:08.791268   74039 cri.go:89] found id: ""
	I0914 01:06:08.791304   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.791312   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:08.791318   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:08.791373   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:08.824906   74039 cri.go:89] found id: ""
	I0914 01:06:08.824946   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.824954   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:08.824960   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:08.825017   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:08.859003   74039 cri.go:89] found id: ""
	I0914 01:06:08.859039   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.859049   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:08.859055   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:08.859104   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:08.896235   74039 cri.go:89] found id: ""
	I0914 01:06:08.896257   74039 logs.go:276] 0 containers: []
	W0914 01:06:08.896265   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:08.896272   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:08.896289   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:08.564906   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.566944   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:09.193902   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:11.693909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.392700   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:10.393226   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:08.910105   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:08.910137   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:08.980728   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:08.980751   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:08.980765   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:09.056032   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:09.056077   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:09.098997   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:09.099022   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.655905   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:11.668403   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:11.668463   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:11.701455   74039 cri.go:89] found id: ""
	I0914 01:06:11.701477   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.701485   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:11.701490   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:11.701543   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:11.732913   74039 cri.go:89] found id: ""
	I0914 01:06:11.732947   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.732958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:11.732967   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:11.733030   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:11.765491   74039 cri.go:89] found id: ""
	I0914 01:06:11.765515   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.765523   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:11.765529   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:11.765584   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:11.799077   74039 cri.go:89] found id: ""
	I0914 01:06:11.799121   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.799135   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:11.799143   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:11.799203   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:11.835388   74039 cri.go:89] found id: ""
	I0914 01:06:11.835419   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.835429   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:11.835437   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:11.835492   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:11.867665   74039 cri.go:89] found id: ""
	I0914 01:06:11.867698   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.867709   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:11.867717   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:11.867812   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:11.904957   74039 cri.go:89] found id: ""
	I0914 01:06:11.904980   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.904988   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:11.904994   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:11.905040   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:11.942389   74039 cri.go:89] found id: ""
	I0914 01:06:11.942414   74039 logs.go:276] 0 containers: []
	W0914 01:06:11.942424   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:11.942434   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:11.942451   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:11.993664   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:11.993705   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:12.008509   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:12.008545   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:12.079277   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:12.079301   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:12.079313   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:12.158146   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:12.158187   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:13.065358   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:15.565938   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.195495   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:16.693699   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:12.893762   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.894075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:14.699243   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:14.711236   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:14.711314   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:14.743705   74039 cri.go:89] found id: ""
	I0914 01:06:14.743729   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.743737   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:14.743742   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:14.743813   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:14.776950   74039 cri.go:89] found id: ""
	I0914 01:06:14.776975   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.776983   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:14.776989   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:14.777036   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:14.810402   74039 cri.go:89] found id: ""
	I0914 01:06:14.810429   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.810437   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:14.810443   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:14.810498   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:14.845499   74039 cri.go:89] found id: ""
	I0914 01:06:14.845533   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.845545   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:14.845553   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:14.845629   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:14.879698   74039 cri.go:89] found id: ""
	I0914 01:06:14.879725   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.879736   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:14.879744   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:14.879829   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:14.919851   74039 cri.go:89] found id: ""
	I0914 01:06:14.919879   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.919891   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:14.919900   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:14.919959   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:14.953954   74039 cri.go:89] found id: ""
	I0914 01:06:14.953980   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.953987   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:14.953992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:14.954038   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:14.987099   74039 cri.go:89] found id: ""
	I0914 01:06:14.987126   74039 logs.go:276] 0 containers: []
	W0914 01:06:14.987134   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:14.987143   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:14.987156   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:15.000959   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:15.000994   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:15.072084   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:15.072108   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:15.072121   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:15.148709   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:15.148746   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:15.185929   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:15.185959   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:17.742815   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:17.756303   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:17.756377   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:17.794785   74039 cri.go:89] found id: ""
	I0914 01:06:17.794811   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.794819   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:17.794824   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:17.794877   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:17.832545   74039 cri.go:89] found id: ""
	I0914 01:06:17.832596   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.832608   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:17.832619   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:17.832676   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:17.872564   74039 cri.go:89] found id: ""
	I0914 01:06:17.872587   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.872595   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:17.872601   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:17.872650   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:17.911396   74039 cri.go:89] found id: ""
	I0914 01:06:17.911425   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.911433   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:17.911439   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:17.911485   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:17.946711   74039 cri.go:89] found id: ""
	I0914 01:06:17.946741   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.946751   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:17.946758   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:17.946831   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:17.979681   74039 cri.go:89] found id: ""
	I0914 01:06:17.979709   74039 logs.go:276] 0 containers: []
	W0914 01:06:17.979719   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:17.979726   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:17.979802   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:18.014272   74039 cri.go:89] found id: ""
	I0914 01:06:18.014313   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.014325   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:18.014334   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:18.014392   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:18.050828   74039 cri.go:89] found id: ""
	I0914 01:06:18.050855   74039 logs.go:276] 0 containers: []
	W0914 01:06:18.050863   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:18.050874   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:18.050884   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:18.092812   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:18.092841   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:18.142795   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:18.142828   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:18.157563   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:18.157588   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:18.233348   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:18.233370   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:18.233381   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:18.065573   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.066438   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.194257   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:21.194293   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:17.394075   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:19.894407   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:20.817023   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:20.829462   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:20.829539   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:20.861912   74039 cri.go:89] found id: ""
	I0914 01:06:20.861941   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.861951   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:20.861959   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:20.862020   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:20.895839   74039 cri.go:89] found id: ""
	I0914 01:06:20.895864   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.895873   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:20.895880   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:20.895941   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:20.933573   74039 cri.go:89] found id: ""
	I0914 01:06:20.933608   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.933617   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:20.933623   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:20.933674   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:20.969849   74039 cri.go:89] found id: ""
	I0914 01:06:20.969875   74039 logs.go:276] 0 containers: []
	W0914 01:06:20.969883   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:20.969889   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:20.969952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:21.005165   74039 cri.go:89] found id: ""
	I0914 01:06:21.005193   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.005200   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:21.005207   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:21.005266   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:21.037593   74039 cri.go:89] found id: ""
	I0914 01:06:21.037617   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.037626   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:21.037632   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:21.037680   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:21.073602   74039 cri.go:89] found id: ""
	I0914 01:06:21.073632   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.073644   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:21.073651   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:21.073714   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:21.107823   74039 cri.go:89] found id: ""
	I0914 01:06:21.107847   74039 logs.go:276] 0 containers: []
	W0914 01:06:21.107854   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:21.107862   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:21.107874   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:21.183501   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:21.183540   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:21.183556   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:21.260339   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:21.260376   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:21.299905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:21.299932   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:21.352871   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:21.352907   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:23.868481   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:23.881664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:23.881749   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:22.566693   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:25.066625   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.695028   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.194308   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:22.393241   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:24.393518   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:26.892962   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:23.919729   74039 cri.go:89] found id: ""
	I0914 01:06:23.919755   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.919763   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:23.919770   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:23.919835   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:23.953921   74039 cri.go:89] found id: ""
	I0914 01:06:23.953949   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.953958   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:23.953964   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:23.954021   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:23.986032   74039 cri.go:89] found id: ""
	I0914 01:06:23.986063   74039 logs.go:276] 0 containers: []
	W0914 01:06:23.986076   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:23.986083   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:23.986146   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:24.020726   74039 cri.go:89] found id: ""
	I0914 01:06:24.020753   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.020764   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:24.020772   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:24.020821   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:24.055853   74039 cri.go:89] found id: ""
	I0914 01:06:24.055878   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.055887   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:24.055892   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:24.055957   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:24.093142   74039 cri.go:89] found id: ""
	I0914 01:06:24.093172   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.093184   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:24.093190   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:24.093253   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:24.131062   74039 cri.go:89] found id: ""
	I0914 01:06:24.131092   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.131103   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:24.131111   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:24.131173   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:24.169202   74039 cri.go:89] found id: ""
	I0914 01:06:24.169251   74039 logs.go:276] 0 containers: []
	W0914 01:06:24.169263   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:24.169273   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:24.169285   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:24.222493   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:24.222532   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:24.237408   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:24.237436   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:24.311923   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:24.311948   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:24.311962   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:24.389227   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:24.389269   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:26.951584   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:26.964596   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:26.964675   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:26.997233   74039 cri.go:89] found id: ""
	I0914 01:06:26.997265   74039 logs.go:276] 0 containers: []
	W0914 01:06:26.997278   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:26.997293   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:26.997357   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:27.032535   74039 cri.go:89] found id: ""
	I0914 01:06:27.032570   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.032582   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:27.032590   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:27.032658   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:27.065947   74039 cri.go:89] found id: ""
	I0914 01:06:27.065974   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.065985   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:27.065992   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:27.066048   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:27.100208   74039 cri.go:89] found id: ""
	I0914 01:06:27.100270   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.100281   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:27.100288   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:27.100340   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:27.133671   74039 cri.go:89] found id: ""
	I0914 01:06:27.133705   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.133714   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:27.133720   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:27.133778   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:27.167403   74039 cri.go:89] found id: ""
	I0914 01:06:27.167433   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.167444   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:27.167452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:27.167517   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:27.201108   74039 cri.go:89] found id: ""
	I0914 01:06:27.201134   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.201145   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:27.201151   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:27.201213   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:27.234560   74039 cri.go:89] found id: ""
	I0914 01:06:27.234587   74039 logs.go:276] 0 containers: []
	W0914 01:06:27.234598   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:27.234608   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:27.234622   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:27.310026   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:27.310061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:27.348905   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:27.348942   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:27.404844   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:27.404883   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:27.418515   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:27.418550   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:27.489558   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:27.565040   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.565397   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.693425   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.694082   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:28.893464   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:30.894855   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:29.990327   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:30.002690   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:30.002757   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:30.038266   74039 cri.go:89] found id: ""
	I0914 01:06:30.038295   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.038304   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:30.038310   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:30.038360   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:30.073706   74039 cri.go:89] found id: ""
	I0914 01:06:30.073737   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.073748   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:30.073755   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:30.073814   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:30.106819   74039 cri.go:89] found id: ""
	I0914 01:06:30.106848   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.106861   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:30.106868   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:30.106934   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:30.142623   74039 cri.go:89] found id: ""
	I0914 01:06:30.142650   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.142661   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:30.142685   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:30.142751   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:30.183380   74039 cri.go:89] found id: ""
	I0914 01:06:30.183404   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.183414   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:30.183421   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:30.183478   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:30.220959   74039 cri.go:89] found id: ""
	I0914 01:06:30.220988   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.220998   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:30.221006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:30.221070   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:30.253672   74039 cri.go:89] found id: ""
	I0914 01:06:30.253705   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.253717   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:30.253724   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:30.253791   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:30.286683   74039 cri.go:89] found id: ""
	I0914 01:06:30.286706   74039 logs.go:276] 0 containers: []
	W0914 01:06:30.286714   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:30.286724   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:30.286733   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:30.337936   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:30.337975   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:30.351202   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:30.351228   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:30.417516   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:30.417541   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:30.417564   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:30.493737   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:30.493774   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:33.035090   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:33.048038   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:33.048129   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:33.086805   74039 cri.go:89] found id: ""
	I0914 01:06:33.086831   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.086842   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:33.086851   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:33.086912   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:33.128182   74039 cri.go:89] found id: ""
	I0914 01:06:33.128213   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.128224   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:33.128232   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:33.128297   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:33.161707   74039 cri.go:89] found id: ""
	I0914 01:06:33.161733   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.161742   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:33.161747   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:33.161805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:33.199843   74039 cri.go:89] found id: ""
	I0914 01:06:33.199866   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.199876   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:33.199884   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:33.199946   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:33.234497   74039 cri.go:89] found id: ""
	I0914 01:06:33.234521   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.234529   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:33.234535   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:33.234592   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:33.266772   74039 cri.go:89] found id: ""
	I0914 01:06:33.266802   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.266813   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:33.266820   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:33.266886   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:33.305843   74039 cri.go:89] found id: ""
	I0914 01:06:33.305873   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.305886   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:33.305893   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:33.305952   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:33.339286   74039 cri.go:89] found id: ""
	I0914 01:06:33.339314   74039 logs.go:276] 0 containers: []
	W0914 01:06:33.339322   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:33.339330   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:33.339341   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:33.390046   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:33.390080   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:33.403169   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:33.403195   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:33.476369   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:33.476395   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:33.476411   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:33.562600   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:33.562647   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:32.065240   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:34.066302   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.565510   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:32.694157   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.194553   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:33.393744   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:35.894318   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:36.101289   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:36.114589   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:36.114645   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:36.148381   74039 cri.go:89] found id: ""
	I0914 01:06:36.148409   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.148420   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:36.148428   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:36.148489   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:36.186469   74039 cri.go:89] found id: ""
	I0914 01:06:36.186498   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.186505   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:36.186511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:36.186558   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:36.223062   74039 cri.go:89] found id: ""
	I0914 01:06:36.223083   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.223091   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:36.223096   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:36.223159   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:36.260182   74039 cri.go:89] found id: ""
	I0914 01:06:36.260211   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.260223   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:36.260230   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:36.260318   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:36.294688   74039 cri.go:89] found id: ""
	I0914 01:06:36.294722   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.294733   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:36.294741   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:36.294805   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:36.334138   74039 cri.go:89] found id: ""
	I0914 01:06:36.334168   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.334180   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:36.334188   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:36.334248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:36.366040   74039 cri.go:89] found id: ""
	I0914 01:06:36.366077   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.366085   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:36.366091   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:36.366154   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:36.401501   74039 cri.go:89] found id: ""
	I0914 01:06:36.401534   74039 logs.go:276] 0 containers: []
	W0914 01:06:36.401544   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:36.401555   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:36.401574   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:36.414359   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:36.414387   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:36.481410   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:36.481432   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:36.481443   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:36.566025   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:36.566061   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:36.607632   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:36.607668   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:38.565797   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.566283   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:37.693678   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.695520   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.194177   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:38.392892   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:40.393067   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:39.162925   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:39.176178   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:39.176267   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:39.210601   74039 cri.go:89] found id: ""
	I0914 01:06:39.210632   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.210641   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:39.210649   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:39.210707   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:39.243993   74039 cri.go:89] found id: ""
	I0914 01:06:39.244025   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.244036   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:39.244044   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:39.244105   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:39.280773   74039 cri.go:89] found id: ""
	I0914 01:06:39.280808   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.280817   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:39.280822   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:39.280870   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:39.314614   74039 cri.go:89] found id: ""
	I0914 01:06:39.314648   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.314658   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:39.314664   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:39.314712   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:39.351957   74039 cri.go:89] found id: ""
	I0914 01:06:39.351987   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.351999   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:39.352006   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:39.352058   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:39.386749   74039 cri.go:89] found id: ""
	I0914 01:06:39.386778   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.386789   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:39.386798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:39.386858   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:39.423965   74039 cri.go:89] found id: ""
	I0914 01:06:39.423991   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.424000   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:39.424005   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:39.424053   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:39.459988   74039 cri.go:89] found id: ""
	I0914 01:06:39.460018   74039 logs.go:276] 0 containers: []
	W0914 01:06:39.460030   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:39.460040   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:39.460052   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:39.510918   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:39.510958   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:39.525189   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:39.525215   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:39.599099   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:39.599126   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:39.599141   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:39.676157   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:39.676197   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:42.221948   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:42.234807   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:42.234887   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:42.267847   74039 cri.go:89] found id: ""
	I0914 01:06:42.267871   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.267879   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:42.267888   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:42.267937   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:42.309519   74039 cri.go:89] found id: ""
	I0914 01:06:42.309547   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.309555   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:42.309561   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:42.309625   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:42.349166   74039 cri.go:89] found id: ""
	I0914 01:06:42.349190   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.349199   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:42.349205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:42.349263   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:42.399150   74039 cri.go:89] found id: ""
	I0914 01:06:42.399179   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.399189   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:42.399197   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:42.399257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:42.460438   74039 cri.go:89] found id: ""
	I0914 01:06:42.460468   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.460477   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:42.460482   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:42.460541   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:42.494641   74039 cri.go:89] found id: ""
	I0914 01:06:42.494670   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.494681   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:42.494687   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:42.494750   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:42.530231   74039 cri.go:89] found id: ""
	I0914 01:06:42.530258   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.530266   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:42.530276   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:42.530341   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:42.564784   74039 cri.go:89] found id: ""
	I0914 01:06:42.564813   74039 logs.go:276] 0 containers: []
	W0914 01:06:42.564822   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:42.564833   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:42.564846   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:42.615087   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:42.615124   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:42.628158   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:42.628186   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:42.697605   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:42.697629   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:42.697645   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:42.774990   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:42.775033   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:43.065246   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.067305   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.194935   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.693939   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:42.394030   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:44.893173   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:46.894092   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:45.313450   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:45.325771   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:45.325849   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:45.360216   74039 cri.go:89] found id: ""
	I0914 01:06:45.360244   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.360254   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:45.360261   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:45.360324   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:45.395943   74039 cri.go:89] found id: ""
	I0914 01:06:45.395967   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.395973   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:45.395980   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:45.396037   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:45.431400   74039 cri.go:89] found id: ""
	I0914 01:06:45.431427   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.431439   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:45.431446   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:45.431504   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:45.466168   74039 cri.go:89] found id: ""
	I0914 01:06:45.466199   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.466209   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:45.466214   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:45.466261   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:45.501066   74039 cri.go:89] found id: ""
	I0914 01:06:45.501097   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.501109   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:45.501116   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:45.501175   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:45.533857   74039 cri.go:89] found id: ""
	I0914 01:06:45.533886   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.533897   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:45.533905   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:45.533964   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:45.568665   74039 cri.go:89] found id: ""
	I0914 01:06:45.568696   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.568709   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:45.568718   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:45.568787   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:45.603113   74039 cri.go:89] found id: ""
	I0914 01:06:45.603144   74039 logs.go:276] 0 containers: []
	W0914 01:06:45.603155   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:45.603167   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:45.603182   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:45.643349   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:45.643377   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:45.696672   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:45.696707   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:45.711191   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:45.711220   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:45.777212   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:45.777244   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:45.777256   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:48.357928   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:48.372440   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:48.372518   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:48.407379   74039 cri.go:89] found id: ""
	I0914 01:06:48.407413   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.407425   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:48.407432   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:48.407494   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:48.441323   74039 cri.go:89] found id: ""
	I0914 01:06:48.441357   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.441369   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:48.441376   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:48.441432   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:48.474791   74039 cri.go:89] found id: ""
	I0914 01:06:48.474824   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.474837   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:48.474844   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:48.474909   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:48.513410   74039 cri.go:89] found id: ""
	I0914 01:06:48.513438   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.513446   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:48.513452   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:48.513501   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:48.548168   74039 cri.go:89] found id: ""
	I0914 01:06:48.548194   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.548202   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:48.548209   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:48.548257   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:48.585085   74039 cri.go:89] found id: ""
	I0914 01:06:48.585110   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.585118   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:48.585124   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:48.585174   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:48.621482   74039 cri.go:89] found id: ""
	I0914 01:06:48.621513   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.621524   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:48.621531   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:48.621603   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:48.657586   74039 cri.go:89] found id: ""
	I0914 01:06:48.657621   74039 logs.go:276] 0 containers: []
	W0914 01:06:48.657632   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:48.657644   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:48.657659   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:48.699454   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:48.699483   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:48.752426   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:48.752467   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:48.767495   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:48.767530   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:48.842148   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:48.842180   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:48.842193   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:47.565932   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.566370   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:48.694430   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:50.704599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:49.393617   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.393779   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:51.430348   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:51.445514   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:51.445582   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:51.482660   74039 cri.go:89] found id: ""
	I0914 01:06:51.482687   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.482699   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:51.482707   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:51.482769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:51.514860   74039 cri.go:89] found id: ""
	I0914 01:06:51.514895   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.514907   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:51.514915   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:51.514975   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:51.551864   74039 cri.go:89] found id: ""
	I0914 01:06:51.551892   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.551902   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:51.551909   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:51.551971   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:51.592603   74039 cri.go:89] found id: ""
	I0914 01:06:51.592632   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.592644   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:51.592654   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:51.592929   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:51.627491   74039 cri.go:89] found id: ""
	I0914 01:06:51.627521   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.627532   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:51.627540   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:51.627606   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:51.664467   74039 cri.go:89] found id: ""
	I0914 01:06:51.664495   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.664506   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:51.664517   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:51.664585   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:51.701524   74039 cri.go:89] found id: ""
	I0914 01:06:51.701547   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.701556   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:51.701563   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:51.701610   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:51.736324   74039 cri.go:89] found id: ""
	I0914 01:06:51.736351   74039 logs.go:276] 0 containers: []
	W0914 01:06:51.736362   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:51.736372   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:51.736385   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:51.811519   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:51.811567   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:51.853363   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:51.853390   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:51.906094   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:51.906130   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:51.919302   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:51.919332   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:51.986458   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:52.065717   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.566343   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.566520   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.194909   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:55.694776   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:53.894007   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:56.393505   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:54.486846   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:54.499530   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:54.499638   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:54.534419   74039 cri.go:89] found id: ""
	I0914 01:06:54.534450   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.534461   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:54.534469   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:54.534533   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:54.571950   74039 cri.go:89] found id: ""
	I0914 01:06:54.571978   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.571986   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:54.571992   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:54.572050   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:54.606656   74039 cri.go:89] found id: ""
	I0914 01:06:54.606687   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.606699   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:54.606706   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:54.606769   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:54.644023   74039 cri.go:89] found id: ""
	I0914 01:06:54.644052   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.644063   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:54.644069   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:54.644127   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:54.681668   74039 cri.go:89] found id: ""
	I0914 01:06:54.681714   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.681722   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:54.681729   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:54.681788   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:54.717538   74039 cri.go:89] found id: ""
	I0914 01:06:54.717567   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.717576   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:54.717582   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:54.717637   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:54.753590   74039 cri.go:89] found id: ""
	I0914 01:06:54.753618   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.753629   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:54.753653   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:54.753716   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:54.785845   74039 cri.go:89] found id: ""
	I0914 01:06:54.785871   74039 logs.go:276] 0 containers: []
	W0914 01:06:54.785880   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:54.785888   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:54.785900   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:54.834715   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:54.834747   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:54.848361   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:54.848402   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:54.920946   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:54.920974   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:54.920992   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:55.004467   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:55.004502   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:57.543169   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:06:57.555652   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:06:57.555730   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:06:57.588880   74039 cri.go:89] found id: ""
	I0914 01:06:57.588917   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.588930   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:06:57.588939   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:06:57.588997   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:06:57.623557   74039 cri.go:89] found id: ""
	I0914 01:06:57.623583   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.623593   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:06:57.623601   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:06:57.623665   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:06:57.662152   74039 cri.go:89] found id: ""
	I0914 01:06:57.662179   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.662187   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:06:57.662193   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:06:57.662248   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:06:57.697001   74039 cri.go:89] found id: ""
	I0914 01:06:57.697026   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.697043   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:06:57.697052   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:06:57.697112   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:06:57.732752   74039 cri.go:89] found id: ""
	I0914 01:06:57.732781   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.732791   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:06:57.732798   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:06:57.732855   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:06:57.769113   74039 cri.go:89] found id: ""
	I0914 01:06:57.769142   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.769151   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:06:57.769157   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:06:57.769215   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:06:57.804701   74039 cri.go:89] found id: ""
	I0914 01:06:57.804733   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.804744   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:06:57.804751   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:06:57.804809   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:06:57.840023   74039 cri.go:89] found id: ""
	I0914 01:06:57.840052   74039 logs.go:276] 0 containers: []
	W0914 01:06:57.840063   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:06:57.840073   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:06:57.840088   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:06:57.893314   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:06:57.893353   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:06:57.908062   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:06:57.908092   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:06:57.982602   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:06:57.982630   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:06:57.982649   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:06:58.063585   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:06:58.063629   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:06:59.066407   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:01.565606   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.194456   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.194556   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:02.194892   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:06:58.893496   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.894090   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:00.605652   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:00.619800   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:00.619871   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:00.656496   74039 cri.go:89] found id: ""
	I0914 01:07:00.656521   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.656530   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:07:00.656536   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:00.656596   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:00.691658   74039 cri.go:89] found id: ""
	I0914 01:07:00.691689   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.691702   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:07:00.691711   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:00.691781   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:00.727814   74039 cri.go:89] found id: ""
	I0914 01:07:00.727846   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.727855   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:07:00.727860   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:00.727913   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:00.762289   74039 cri.go:89] found id: ""
	I0914 01:07:00.762316   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.762326   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:07:00.762333   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:00.762398   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:00.796481   74039 cri.go:89] found id: ""
	I0914 01:07:00.796507   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.796516   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:07:00.796521   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:00.796574   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:00.836328   74039 cri.go:89] found id: ""
	I0914 01:07:00.836360   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.836384   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:07:00.836393   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:00.836465   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:00.872304   74039 cri.go:89] found id: ""
	I0914 01:07:00.872333   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.872341   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:00.872347   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:07:00.872395   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:07:00.909871   74039 cri.go:89] found id: ""
	I0914 01:07:00.909898   74039 logs.go:276] 0 containers: []
	W0914 01:07:00.909906   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:07:00.909916   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:00.909929   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:00.990292   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:07:00.990334   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:01.031201   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:01.031260   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:01.086297   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:01.086337   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:01.100936   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:01.100973   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:07:01.169937   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:07:03.670748   74039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:03.685107   74039 kubeadm.go:597] duration metric: took 4m2.612600892s to restartPrimaryControlPlane
	W0914 01:07:03.685194   74039 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:03.685225   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:04.720278   74039 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035028487s)
	I0914 01:07:04.720359   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:04.734797   74039 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:04.746028   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:04.757914   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:04.757937   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:04.757989   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:04.767466   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:04.767545   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:04.777632   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:04.787339   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:04.787408   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:04.798049   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.808105   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:04.808184   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:04.818112   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:04.827571   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:04.827631   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:04.837244   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:04.913427   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:07:04.913526   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:05.069092   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:05.069238   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:05.069412   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:07:05.263731   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:05.265516   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:05.265624   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:05.265700   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:05.265839   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:05.265936   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:05.266015   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:05.266102   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:05.266201   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:05.266567   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:05.266947   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:05.267285   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:05.267358   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:05.267438   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:05.437052   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:05.565961   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:05.897119   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:06.026378   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:06.041324   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:06.042559   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:06.042648   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:06.201276   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:04.065671   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.066227   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:04.195288   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.694029   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:03.393265   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:05.393691   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:06.202994   74039 out.go:235]   - Booting up control plane ...
	I0914 01:07:06.203120   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:06.207330   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:06.208264   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:06.209066   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:06.211145   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:07:08.565019   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:10.565374   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:08.694599   73629 pod_ready.go:103] pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.688605   73629 pod_ready.go:82] duration metric: took 4m0.000839394s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:09.688630   73629 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-644mh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 01:07:09.688648   73629 pod_ready.go:39] duration metric: took 4m12.548876928s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:09.688672   73629 kubeadm.go:597] duration metric: took 4m20.670013353s to restartPrimaryControlPlane
	W0914 01:07:09.688722   73629 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 01:07:09.688759   73629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:07:07.893671   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:09.893790   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.566007   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:15.065162   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:12.394562   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:14.894088   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:16.894204   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:17.065486   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.065898   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.565070   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:19.393879   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:21.394266   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.565818   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:26.066156   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:23.893908   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:25.894587   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.566588   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.566662   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:28.393270   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:30.394761   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:33.065335   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.066815   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:36.020225   73629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.331443489s)
	I0914 01:07:36.020306   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:36.035778   73629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 01:07:36.046584   73629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:07:36.057665   73629 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:07:36.057701   73629 kubeadm.go:157] found existing configuration files:
	
	I0914 01:07:36.057757   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:07:36.069571   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:07:36.069633   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:07:36.080478   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:07:36.090505   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:07:36.090572   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:07:36.101325   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.111319   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:07:36.111384   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:07:36.121149   73629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:07:36.130306   73629 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:07:36.130382   73629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:07:36.139803   73629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:07:36.186333   73629 kubeadm.go:310] W0914 01:07:36.162139    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.186991   73629 kubeadm.go:310] W0914 01:07:36.162903    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 01:07:36.301086   73629 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:07:32.893602   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:35.393581   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:37.568481   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.067563   74318 pod_ready.go:103] pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:40.565991   74318 pod_ready.go:82] duration metric: took 4m0.00665512s for pod "metrics-server-6867b74b74-4v8px" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:40.566023   74318 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:40.566034   74318 pod_ready.go:39] duration metric: took 4m5.046345561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
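	The 4m0s budget for the metrics-server pod expired without it ever reporting Ready, so the extra wait above gives up and this run moves on to health-checking the apiserver. A minimal diagnostic sketch for that situation (the pod name is taken from this log; the kubeconfig context is a placeholder for whichever profile this run belongs to, and the k8s-app label is assumed from the stock metrics-server manifest):

		kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
		kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-4v8px
		kubectl --context <profile> -n kube-system logs deploy/metrics-server --tail=100

	describe surfaces failing readiness probes and image-pull errors, while the container log shows scrape or TLS failures against the kubelets.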
	I0914 01:07:40.566052   74318 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:40.566090   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:40.566149   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:40.615149   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:40.615175   74318 cri.go:89] found id: ""
	I0914 01:07:40.615185   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:40.615248   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.619387   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:40.619460   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:40.663089   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:40.663125   74318 cri.go:89] found id: ""
	I0914 01:07:40.663134   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:40.663200   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.667420   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:40.667494   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:40.708057   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:40.708084   74318 cri.go:89] found id: ""
	I0914 01:07:40.708094   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:40.708156   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.712350   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:40.712429   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:40.759340   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:40.759366   74318 cri.go:89] found id: ""
	I0914 01:07:40.759374   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:40.759435   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.763484   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:40.763563   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:40.808401   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.808429   74318 cri.go:89] found id: ""
	I0914 01:07:40.808440   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:40.808505   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.812869   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:40.812944   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:40.857866   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:40.857893   74318 cri.go:89] found id: ""
	I0914 01:07:40.857902   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:40.857957   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.862252   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:40.862339   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:40.900924   74318 cri.go:89] found id: ""
	I0914 01:07:40.900953   74318 logs.go:276] 0 containers: []
	W0914 01:07:40.900964   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:40.900972   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:40.901035   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:40.940645   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:40.940670   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:40.940676   74318 cri.go:89] found id: ""
	I0914 01:07:40.940685   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:40.940741   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.944819   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:40.948642   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:40.948666   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:40.982323   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:40.982353   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:41.028479   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:41.028505   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:41.066640   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:41.066669   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:41.619317   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:41.619361   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:41.634171   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:41.634214   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:41.700170   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:41.700202   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:41.747473   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:41.747514   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:41.785351   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:41.785381   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:41.862442   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:41.862488   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:41.909251   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:41.909288   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:37.393871   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:39.393986   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:41.394429   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.044966   73629 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 01:07:45.045070   73629 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:07:45.045198   73629 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:07:45.045337   73629 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:07:45.045475   73629 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 01:07:45.045558   73629 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:07:45.047226   73629 out.go:235]   - Generating certificates and keys ...
	I0914 01:07:45.047311   73629 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:07:45.047370   73629 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:07:45.047441   73629 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:07:45.047493   73629 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:07:45.047556   73629 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:07:45.047605   73629 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:07:45.047667   73629 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:07:45.047719   73629 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:07:45.047851   73629 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:07:45.047955   73629 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:07:45.048012   73629 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:07:45.048091   73629 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:07:45.048159   73629 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:07:45.048226   73629 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 01:07:45.048276   73629 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:07:45.048332   73629 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:07:45.048378   73629 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:07:45.048453   73629 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:07:45.048537   73629 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:07:45.049937   73629 out.go:235]   - Booting up control plane ...
	I0914 01:07:45.050064   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:07:45.050190   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:07:45.050292   73629 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:07:45.050435   73629 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:07:45.050582   73629 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:07:45.050645   73629 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:07:45.050850   73629 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 01:07:45.050983   73629 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 01:07:45.051079   73629 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002631079s
	I0914 01:07:45.051169   73629 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 01:07:45.051257   73629 kubeadm.go:310] [api-check] The API server is healthy after 5.00351629s
	I0914 01:07:45.051380   73629 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 01:07:45.051571   73629 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 01:07:45.051654   73629 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 01:07:45.051926   73629 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-057857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 01:07:45.052028   73629 kubeadm.go:310] [bootstrap-token] Using token: xh1eul.goxmgrawoq4kftyr
	I0914 01:07:45.053539   73629 out.go:235]   - Configuring RBAC rules ...
	I0914 01:07:45.053661   73629 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 01:07:45.053769   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 01:07:45.053966   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 01:07:45.054145   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 01:07:45.054294   73629 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 01:07:45.054410   73629 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 01:07:45.054542   73629 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 01:07:45.054618   73629 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 01:07:45.054690   73629 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 01:07:45.054710   73629 kubeadm.go:310] 
	I0914 01:07:45.054795   73629 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 01:07:45.054804   73629 kubeadm.go:310] 
	I0914 01:07:45.054920   73629 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 01:07:45.054932   73629 kubeadm.go:310] 
	I0914 01:07:45.054969   73629 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 01:07:45.055052   73629 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 01:07:45.055124   73629 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 01:07:45.055133   73629 kubeadm.go:310] 
	I0914 01:07:45.055239   73629 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 01:07:45.055247   73629 kubeadm.go:310] 
	I0914 01:07:45.055326   73629 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 01:07:45.055345   73629 kubeadm.go:310] 
	I0914 01:07:45.055415   73629 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 01:07:45.055548   73629 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 01:07:45.055651   73629 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 01:07:45.055661   73629 kubeadm.go:310] 
	I0914 01:07:45.055778   73629 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 01:07:45.055901   73629 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 01:07:45.055917   73629 kubeadm.go:310] 
	I0914 01:07:45.056019   73629 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056151   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba \
	I0914 01:07:45.056182   73629 kubeadm.go:310] 	--control-plane 
	I0914 01:07:45.056191   73629 kubeadm.go:310] 
	I0914 01:07:45.056320   73629 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 01:07:45.056333   73629 kubeadm.go:310] 
	I0914 01:07:45.056431   73629 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xh1eul.goxmgrawoq4kftyr \
	I0914 01:07:45.056579   73629 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c9737901f0695d67a5101b4dcc6695f6a9a1a9a93628aa4f54525a484e155bba 
	I0914 01:07:45.056599   73629 cni.go:84] Creating CNI manager for ""
	I0914 01:07:45.056608   73629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 01:07:45.058074   73629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
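	At this point minikube writes a bridge CNI definition into /etc/cni/net.d (the scp of 1-k8s.conflist, 496 bytes, appears a few lines further down in this log). A quick way to confirm the file landed and what CRI-O will load, sketched here assuming access via minikube ssh -p no-preload-057857 or directly on the node:

		sudo ls -l /etc/cni/net.d/
		sudo cat /etc/cni/net.d/1-k8s.conflist

	The conflist is typically a standard CNI bridge-plugin config with host-local IPAM; CRI-O normally picks up changes in this directory without a restart.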
	I0914 01:07:41.979657   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:41.979692   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:42.109134   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:42.109168   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.646337   74318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:44.663544   74318 api_server.go:72] duration metric: took 4m16.876006557s to wait for apiserver process to appear ...
	I0914 01:07:44.663575   74318 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:44.663619   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:44.663685   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:44.698143   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:44.698164   74318 cri.go:89] found id: ""
	I0914 01:07:44.698171   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:44.698219   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.702164   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:44.702241   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:44.742183   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:44.742208   74318 cri.go:89] found id: ""
	I0914 01:07:44.742216   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:44.742258   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.746287   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:44.746368   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:44.788193   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:44.788221   74318 cri.go:89] found id: ""
	I0914 01:07:44.788229   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:44.788274   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.792200   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:44.792276   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:44.826655   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:44.826679   74318 cri.go:89] found id: ""
	I0914 01:07:44.826686   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:44.826748   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.830552   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:44.830620   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:44.865501   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:44.865528   74318 cri.go:89] found id: ""
	I0914 01:07:44.865538   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:44.865608   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.869609   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:44.869686   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:44.908621   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:44.908641   74318 cri.go:89] found id: ""
	I0914 01:07:44.908650   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:44.908713   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.913343   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:44.913423   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:44.949066   74318 cri.go:89] found id: ""
	I0914 01:07:44.949093   74318 logs.go:276] 0 containers: []
	W0914 01:07:44.949115   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:44.949122   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:44.949187   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:44.986174   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:44.986199   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:44.986205   74318 cri.go:89] found id: ""
	I0914 01:07:44.986213   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:44.986282   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.991372   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:44.995733   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:44.995759   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:45.039347   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:45.039398   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:45.087967   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:45.087999   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:45.156269   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:45.156321   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:45.198242   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:45.198270   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:45.251464   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:45.251500   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:45.324923   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:45.324960   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:45.338844   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:45.338869   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:45.379489   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:45.379522   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:45.829525   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:45.829573   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:45.885528   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:45.885571   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:46.003061   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:46.003106   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:46.048273   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:46.048309   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:45.059546   73629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 01:07:45.074813   73629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 01:07:45.095946   73629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 01:07:45.096015   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.096039   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-057857 minikube.k8s.io/updated_at=2024_09_14T01_07_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=no-preload-057857 minikube.k8s.io/primary=true
	I0914 01:07:45.130188   73629 ops.go:34] apiserver oom_adj: -16
	I0914 01:07:45.307662   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:45.807800   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.308376   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:46.808472   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:43.893641   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:45.896137   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:46.212547   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:07:46.213229   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:46.213413   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
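	This is a separate run (log prefix 74039) whose kubelet never answered its health endpoint, so kubeadm's 40s kubelet-check timed out on exactly the curl shown above. The usual on-host follow-up, as a sketch (run via minikube ssh for that profile or directly on the node):

		sudo systemctl status kubelet
		sudo journalctl -u kubelet --no-pager -n 100
		curl -sS http://localhost:10248/healthz

	A connection-refused healthz together with a crash-looping kubelet unit in the journal usually points at a bad kubelet config, a cgroup-driver mismatch with the container runtime, or a missing /var/lib/kubelet/config.yaml.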
	I0914 01:07:47.308331   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:47.808732   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.308295   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.807695   73629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 01:07:48.905584   73629 kubeadm.go:1113] duration metric: took 3.809634495s to wait for elevateKubeSystemPrivileges
	I0914 01:07:48.905626   73629 kubeadm.go:394] duration metric: took 4m59.935573002s to StartCluster
	I0914 01:07:48.905648   73629 settings.go:142] acquiring lock: {Name:mkb2c953720dc8c4ec2c5da34d8bee8123ecd7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.905747   73629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 01:07:48.907665   73629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5422/kubeconfig: {Name:mk2c9cc32be4df6a645d5ce5b38ddefbc30cdc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.907997   73629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:48.908034   73629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:48.908127   73629 addons.go:69] Setting storage-provisioner=true in profile "no-preload-057857"
	I0914 01:07:48.908147   73629 addons.go:234] Setting addon storage-provisioner=true in "no-preload-057857"
	W0914 01:07:48.908156   73629 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:07:48.908177   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908215   73629 config.go:182] Loaded profile config "no-preload-057857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:48.908273   73629 addons.go:69] Setting default-storageclass=true in profile "no-preload-057857"
	I0914 01:07:48.908292   73629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-057857"
	I0914 01:07:48.908493   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908522   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.908596   73629 addons.go:69] Setting metrics-server=true in profile "no-preload-057857"
	I0914 01:07:48.908631   73629 addons.go:234] Setting addon metrics-server=true in "no-preload-057857"
	W0914 01:07:48.908652   73629 addons.go:243] addon metrics-server should already be in state true
	I0914 01:07:48.908694   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.908707   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.908954   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.909252   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.909319   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.910019   73629 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:48.911411   73629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:48.926895   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0914 01:07:48.927445   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.928096   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.928124   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.928538   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.928550   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0914 01:07:48.928616   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0914 01:07:48.928710   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.928959   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929059   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.929446   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.929470   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930063   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930132   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.930159   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.930497   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.930755   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.930794   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.931030   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.931073   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.933027   73629 addons.go:234] Setting addon default-storageclass=true in "no-preload-057857"
	W0914 01:07:48.933050   73629 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:07:48.933081   73629 host.go:66] Checking if "no-preload-057857" exists ...
	I0914 01:07:48.933460   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.933512   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.947685   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0914 01:07:48.948179   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.948798   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.948817   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.949188   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.949350   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.949657   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0914 01:07:48.950106   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.951975   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.951994   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.952065   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.952533   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0914 01:07:48.952710   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.953081   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.953111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.953576   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.953595   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.953874   73629 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:07:48.954053   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.954722   73629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 01:07:48.954761   73629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 01:07:48.954971   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:07:48.954974   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.954983   73629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:07:48.954999   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.956513   73629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:07:48.957624   73629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:48.957642   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:07:48.957660   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.959735   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960824   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.960874   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.960881   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.960890   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961065   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961359   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.961389   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.961418   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.961663   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.961670   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.961811   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.961944   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.962117   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:48.973063   73629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0914 01:07:48.973569   73629 main.go:141] libmachine: () Calling .GetVersion
	I0914 01:07:48.974142   73629 main.go:141] libmachine: Using API Version  1
	I0914 01:07:48.974168   73629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 01:07:48.974481   73629 main.go:141] libmachine: () Calling .GetMachineName
	I0914 01:07:48.974685   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetState
	I0914 01:07:48.976665   73629 main.go:141] libmachine: (no-preload-057857) Calling .DriverName
	I0914 01:07:48.980063   73629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:48.980089   73629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:07:48.980111   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHHostname
	I0914 01:07:48.983565   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984012   73629 main.go:141] libmachine: (no-preload-057857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:57:32", ip: ""} in network mk-no-preload-057857: {Iface:virbr3 ExpiryTime:2024-09-14 02:02:25 +0000 UTC Type:0 Mac:52:54:00:12:57:32 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:no-preload-057857 Clientid:01:52:54:00:12:57:32}
	I0914 01:07:48.984082   73629 main.go:141] libmachine: (no-preload-057857) DBG | domain no-preload-057857 has defined IP address 192.168.39.129 and MAC address 52:54:00:12:57:32 in network mk-no-preload-057857
	I0914 01:07:48.984338   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHPort
	I0914 01:07:48.984520   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHKeyPath
	I0914 01:07:48.984680   73629 main.go:141] libmachine: (no-preload-057857) Calling .GetSSHUsername
	I0914 01:07:48.984819   73629 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/no-preload-057857/id_rsa Username:docker}
	I0914 01:07:49.174367   73629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.202858   73629 node_ready.go:35] waiting up to 6m0s for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237312   73629 node_ready.go:49] node "no-preload-057857" has status "Ready":"True"
	I0914 01:07:49.237344   73629 node_ready.go:38] duration metric: took 34.448967ms for node "no-preload-057857" to be "Ready" ...
	I0914 01:07:49.237357   73629 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:49.245680   73629 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:49.301954   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:07:49.301983   73629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:07:49.314353   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:07:49.330720   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:07:49.357870   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:07:49.357897   73629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:07:49.444063   73629 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:49.444091   73629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:07:49.556752   73629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:07:50.122680   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122710   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.122762   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.122787   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123028   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123043   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123069   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.123083   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123094   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123102   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123031   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123132   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.123142   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.123149   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.123354   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.123368   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.124736   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.124755   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.124769   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.151725   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.151752   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.152129   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.152137   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.152164   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.503868   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.503890   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504175   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504193   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504205   73629 main.go:141] libmachine: Making call to close driver server
	I0914 01:07:50.504213   73629 main.go:141] libmachine: (no-preload-057857) Calling .Close
	I0914 01:07:50.504246   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504470   73629 main.go:141] libmachine: Successfully made call to close driver server
	I0914 01:07:50.504487   73629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 01:07:50.504501   73629 main.go:141] libmachine: (no-preload-057857) DBG | Closing plugin on server side
	I0914 01:07:50.504509   73629 addons.go:475] Verifying addon metrics-server=true in "no-preload-057857"
	I0914 01:07:50.506437   73629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
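	metrics-server is now installed alongside storage-provisioner and the default StorageClass, and the surrounding waits in this log are for its pod to turn Ready. Once it is up, a quick way to confirm the addon is actually serving metrics, sketched with the profile name from this log used as the kubectl context:

		kubectl --context no-preload-057857 get apiservice v1beta1.metrics.k8s.io
		kubectl --context no-preload-057857 -n kube-system get pods -l k8s-app=metrics-server
		kubectl --context no-preload-057857 top nodes

	kubectl top only returns data after the v1beta1.metrics.k8s.io APIService reports Available=True.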
	I0914 01:07:48.591238   74318 api_server.go:253] Checking apiserver healthz at https://192.168.50.105:8443/healthz ...
	I0914 01:07:48.596777   74318 api_server.go:279] https://192.168.50.105:8443/healthz returned 200:
	ok
	I0914 01:07:48.597862   74318 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:48.597886   74318 api_server.go:131] duration metric: took 3.934303095s to wait for apiserver health ...
	I0914 01:07:48.597895   74318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:48.597920   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:48.597977   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:48.648318   74318 cri.go:89] found id: "dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:48.648339   74318 cri.go:89] found id: ""
	I0914 01:07:48.648347   74318 logs.go:276] 1 containers: [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9]
	I0914 01:07:48.648399   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.652903   74318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:48.652983   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:48.693067   74318 cri.go:89] found id: "80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:48.693093   74318 cri.go:89] found id: ""
	I0914 01:07:48.693102   74318 logs.go:276] 1 containers: [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a]
	I0914 01:07:48.693161   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.697395   74318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:48.697449   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:48.733368   74318 cri.go:89] found id: "107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:48.733393   74318 cri.go:89] found id: ""
	I0914 01:07:48.733403   74318 logs.go:276] 1 containers: [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db]
	I0914 01:07:48.733459   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.737236   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:48.737307   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:48.773293   74318 cri.go:89] found id: "9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:48.773318   74318 cri.go:89] found id: ""
	I0914 01:07:48.773326   74318 logs.go:276] 1 containers: [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637]
	I0914 01:07:48.773384   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.777825   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:48.777899   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:48.816901   74318 cri.go:89] found id: "f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:48.816933   74318 cri.go:89] found id: ""
	I0914 01:07:48.816943   74318 logs.go:276] 1 containers: [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98]
	I0914 01:07:48.817012   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.821326   74318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:48.821403   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:48.854443   74318 cri.go:89] found id: "5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:48.854474   74318 cri.go:89] found id: ""
	I0914 01:07:48.854484   74318 logs.go:276] 1 containers: [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b]
	I0914 01:07:48.854543   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.858367   74318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:48.858441   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:48.899670   74318 cri.go:89] found id: ""
	I0914 01:07:48.899697   74318 logs.go:276] 0 containers: []
	W0914 01:07:48.899707   74318 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:48.899714   74318 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:48.899778   74318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:48.952557   74318 cri.go:89] found id: "17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:48.952573   74318 cri.go:89] found id: "b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:48.952577   74318 cri.go:89] found id: ""
	I0914 01:07:48.952585   74318 logs.go:276] 2 containers: [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c]
	I0914 01:07:48.952632   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.964470   74318 ssh_runner.go:195] Run: which crictl
	I0914 01:07:48.968635   74318 logs.go:123] Gathering logs for kube-scheduler [9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637] ...
	I0914 01:07:48.968663   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bdf5d4a96c47a7c22d33934f27ca2aa08662e864ed64e772c583288afca5637"
	I0914 01:07:49.010193   74318 logs.go:123] Gathering logs for kube-proxy [f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98] ...
	I0914 01:07:49.010237   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cf7d5e340de8458195ef7f4ad1c645267e71babfa309366dc9ec771cc50f98"
	I0914 01:07:49.050563   74318 logs.go:123] Gathering logs for kube-controller-manager [5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b] ...
	I0914 01:07:49.050597   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd32fdb3cf8f256b8931c9b70e8088c0875bbfc04343b596d55e1174775033b"
	I0914 01:07:49.109947   74318 logs.go:123] Gathering logs for container status ...
	I0914 01:07:49.109996   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:07:49.165616   74318 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:07:49.165662   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:07:49.287360   74318 logs.go:123] Gathering logs for kube-apiserver [dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9] ...
	I0914 01:07:49.287405   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe67fa7604039cacedcf93e3537a4a0fae7a27e1056a354b475a9348bb2c3a9"
	I0914 01:07:49.334352   74318 logs.go:123] Gathering logs for etcd [80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a] ...
	I0914 01:07:49.334377   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80a81c3710a32a6870ea2e4862c159062552cc8211332639a85c85e2ddbe855a"
	I0914 01:07:49.384242   74318 logs.go:123] Gathering logs for storage-provisioner [17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3] ...
	I0914 01:07:49.384298   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df87a7f9d1cbeface4fdc966e6857c34f8eaf4e28b998e39a73812ccec0ce3"
	I0914 01:07:49.430352   74318 logs.go:123] Gathering logs for storage-provisioner [b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c] ...
	I0914 01:07:49.430394   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b065365cf5210ee85bc5adf7870a69c8c769f259d1337c13c485a0ae3948c39c"
	I0914 01:07:49.471052   74318 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:49.471079   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:07:49.841700   74318 logs.go:123] Gathering logs for kubelet ...
	I0914 01:07:49.841740   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:07:49.924441   74318 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:49.924492   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:49.944140   74318 logs.go:123] Gathering logs for coredns [107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db] ...
	I0914 01:07:49.944183   74318 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 107cc9128ebff5e069acae235ef5dfeeb35a327b9d85c10e550a05591bcb06db"
	I0914 01:07:50.507454   73629 addons.go:510] duration metric: took 1.599422238s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 01:07:51.252950   73629 pod_ready.go:103] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:48.393557   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:50.394454   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:52.497266   74318 system_pods.go:59] 8 kube-system pods found
	I0914 01:07:52.497307   74318 system_pods.go:61] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.497313   74318 system_pods.go:61] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.497318   74318 system_pods.go:61] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.497324   74318 system_pods.go:61] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.497328   74318 system_pods.go:61] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.497334   74318 system_pods.go:61] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.497344   74318 system_pods.go:61] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.497353   74318 system_pods.go:61] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.497366   74318 system_pods.go:74] duration metric: took 3.899464014s to wait for pod list to return data ...
	I0914 01:07:52.497382   74318 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:52.499924   74318 default_sa.go:45] found service account: "default"
	I0914 01:07:52.499946   74318 default_sa.go:55] duration metric: took 2.558404ms for default service account to be created ...
	I0914 01:07:52.499954   74318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:52.504770   74318 system_pods.go:86] 8 kube-system pods found
	I0914 01:07:52.504795   74318 system_pods.go:89] "coredns-7c65d6cfc9-ssskq" [74eab481-dc57-4dd2-a673-33e7d853cee4] Running
	I0914 01:07:52.504800   74318 system_pods.go:89] "etcd-embed-certs-880490" [77dab14e-9628-4e2a-a081-2705d722e93a] Running
	I0914 01:07:52.504804   74318 system_pods.go:89] "kube-apiserver-embed-certs-880490" [5dc9b36c-e258-4f7e-ac3f-910954fef10d] Running
	I0914 01:07:52.504809   74318 system_pods.go:89] "kube-controller-manager-embed-certs-880490" [78453221-74bf-4f64-b459-1ecb1767fb2d] Running
	I0914 01:07:52.504812   74318 system_pods.go:89] "kube-proxy-566n8" [e6fbcc6d-aa8a-4d4a-ab64-929170c01a4a] Running
	I0914 01:07:52.504816   74318 system_pods.go:89] "kube-scheduler-embed-certs-880490" [f3dbdb58-d844-4927-ae74-621d5f0883f0] Running
	I0914 01:07:52.504822   74318 system_pods.go:89] "metrics-server-6867b74b74-4v8px" [e291b7c4-a9b2-4715-9d78-926618e87877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:52.504826   74318 system_pods.go:89] "storage-provisioner" [7d1d7c67-c4e8-4520-8385-8ea8668177e9] Running
	I0914 01:07:52.504833   74318 system_pods.go:126] duration metric: took 4.874526ms to wait for k8s-apps to be running ...
	I0914 01:07:52.504841   74318 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:52.504908   74318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:52.521594   74318 system_svc.go:56] duration metric: took 16.742919ms WaitForService to wait for kubelet
	I0914 01:07:52.521631   74318 kubeadm.go:582] duration metric: took 4m24.734100172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:52.521656   74318 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:52.524928   74318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:52.524950   74318 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:52.524960   74318 node_conditions.go:105] duration metric: took 3.299528ms to run NodePressure ...
	I0914 01:07:52.524972   74318 start.go:241] waiting for startup goroutines ...
	I0914 01:07:52.524978   74318 start.go:246] waiting for cluster config update ...
	I0914 01:07:52.524990   74318 start.go:255] writing updated cluster config ...
	I0914 01:07:52.525245   74318 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:52.575860   74318 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:52.577583   74318 out.go:177] * Done! kubectl is now configured to use "embed-certs-880490" cluster and "default" namespace by default
	I0914 01:07:51.214087   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:07:51.214374   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:07:52.752407   73629 pod_ready.go:93] pod "etcd-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.752429   73629 pod_ready.go:82] duration metric: took 3.506723517s for pod "etcd-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.752438   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756908   73629 pod_ready.go:93] pod "kube-apiserver-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:52.756931   73629 pod_ready.go:82] duration metric: took 4.487049ms for pod "kube-apiserver-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:52.756940   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:54.764049   73629 pod_ready.go:103] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.763966   73629 pod_ready.go:93] pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.763997   73629 pod_ready.go:82] duration metric: took 4.007049286s for pod "kube-controller-manager-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.764009   73629 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769956   73629 pod_ready.go:93] pod "kube-scheduler-no-preload-057857" in "kube-system" namespace has status "Ready":"True"
	I0914 01:07:56.769983   73629 pod_ready.go:82] duration metric: took 5.966294ms for pod "kube-scheduler-no-preload-057857" in "kube-system" namespace to be "Ready" ...
	I0914 01:07:56.769994   73629 pod_ready.go:39] duration metric: took 7.532623561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:56.770010   73629 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:56.770074   73629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:07:56.785845   73629 api_server.go:72] duration metric: took 7.877811681s to wait for apiserver process to appear ...
	I0914 01:07:56.785878   73629 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:07:56.785900   73629 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0914 01:07:56.791394   73629 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0914 01:07:56.792552   73629 api_server.go:141] control plane version: v1.31.1
	I0914 01:07:56.792573   73629 api_server.go:131] duration metric: took 6.689365ms to wait for apiserver health ...
	I0914 01:07:56.792581   73629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:07:56.798227   73629 system_pods.go:59] 9 kube-system pods found
	I0914 01:07:56.798252   73629 system_pods.go:61] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.798260   73629 system_pods.go:61] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.798264   73629 system_pods.go:61] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.798270   73629 system_pods.go:61] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.798273   73629 system_pods.go:61] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.798277   73629 system_pods.go:61] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.798282   73629 system_pods.go:61] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.798288   73629 system_pods.go:61] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.798292   73629 system_pods.go:61] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.798299   73629 system_pods.go:74] duration metric: took 5.712618ms to wait for pod list to return data ...
	I0914 01:07:56.798310   73629 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:07:56.801512   73629 default_sa.go:45] found service account: "default"
	I0914 01:07:56.801533   73629 default_sa.go:55] duration metric: took 3.215883ms for default service account to be created ...
	I0914 01:07:56.801540   73629 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:07:56.806584   73629 system_pods.go:86] 9 kube-system pods found
	I0914 01:07:56.806613   73629 system_pods.go:89] "coredns-7c65d6cfc9-52vdb" [c6d8bc35-9a11-4903-a681-767cf3584d68] Running
	I0914 01:07:56.806621   73629 system_pods.go:89] "coredns-7c65d6cfc9-jqk6k" [bef11f33-25b0-4b58-bbea-4cd43f02955c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:07:56.806628   73629 system_pods.go:89] "etcd-no-preload-057857" [00fb690f-6492-4b9e-b2a2-8408786caef4] Running
	I0914 01:07:56.806634   73629 system_pods.go:89] "kube-apiserver-no-preload-057857" [a09f93a8-4bbc-4897-8175-a23e83065b8f] Running
	I0914 01:07:56.806638   73629 system_pods.go:89] "kube-controller-manager-no-preload-057857" [daa05775-1500-40d4-b3a1-c8809a847cbb] Running
	I0914 01:07:56.806643   73629 system_pods.go:89] "kube-proxy-m6d75" [e8d2b77d-820d-4a2e-ab4e-83909c0e1382] Running
	I0914 01:07:56.806648   73629 system_pods.go:89] "kube-scheduler-no-preload-057857" [47667b2a-5c3b-4b1b-908f-6c248f105319] Running
	I0914 01:07:56.806652   73629 system_pods.go:89] "metrics-server-6867b74b74-d78nt" [5f77cfda-f8e2-4b08-8050-473c500f7504] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:07:56.806657   73629 system_pods.go:89] "storage-provisioner" [05866937-f16f-4aea-bf2d-3e6d644a5fa7] Running
	I0914 01:07:56.806664   73629 system_pods.go:126] duration metric: took 5.119006ms to wait for k8s-apps to be running ...
	I0914 01:07:56.806671   73629 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:07:56.806718   73629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:07:56.821816   73629 system_svc.go:56] duration metric: took 15.133756ms WaitForService to wait for kubelet
	I0914 01:07:56.821870   73629 kubeadm.go:582] duration metric: took 7.913839247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:56.821886   73629 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:07:56.824762   73629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:07:56.824783   73629 node_conditions.go:123] node cpu capacity is 2
	I0914 01:07:56.824792   73629 node_conditions.go:105] duration metric: took 2.901317ms to run NodePressure ...
	I0914 01:07:56.824802   73629 start.go:241] waiting for startup goroutines ...
	I0914 01:07:56.824808   73629 start.go:246] waiting for cluster config update ...
	I0914 01:07:56.824818   73629 start.go:255] writing updated cluster config ...
	I0914 01:07:56.825097   73629 ssh_runner.go:195] Run: rm -f paused
	I0914 01:07:56.874222   73629 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:07:56.876124   73629 out.go:177] * Done! kubectl is now configured to use "no-preload-057857" cluster and "default" namespace by default
	I0914 01:07:52.893085   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:54.894526   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:56.895689   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392415   73455 pod_ready.go:103] pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace has status "Ready":"False"
	I0914 01:07:59.392444   73455 pod_ready.go:82] duration metric: took 4m0.005475682s for pod "metrics-server-6867b74b74-lxzvw" in "kube-system" namespace to be "Ready" ...
	E0914 01:07:59.392453   73455 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:07:59.392461   73455 pod_ready.go:39] duration metric: took 4m6.046976745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:07:59.392475   73455 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:07:59.392499   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:07:59.392548   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:07:59.439210   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:07:59.439234   73455 cri.go:89] found id: ""
	I0914 01:07:59.439242   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:07:59.439292   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.443572   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:07:59.443625   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:07:59.481655   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.481681   73455 cri.go:89] found id: ""
	I0914 01:07:59.481690   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:07:59.481748   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.485714   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:07:59.485787   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:07:59.526615   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:07:59.526646   73455 cri.go:89] found id: ""
	I0914 01:07:59.526656   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:07:59.526713   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.530806   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:07:59.530880   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:07:59.567641   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.567665   73455 cri.go:89] found id: ""
	I0914 01:07:59.567672   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:07:59.567731   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.571549   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:07:59.571627   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:07:59.607739   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:07:59.607778   73455 cri.go:89] found id: ""
	I0914 01:07:59.607808   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:07:59.607866   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.612763   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:07:59.612843   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:07:59.648724   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:07:59.648754   73455 cri.go:89] found id: ""
	I0914 01:07:59.648763   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:07:59.648818   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.652808   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:07:59.652874   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:07:59.691946   73455 cri.go:89] found id: ""
	I0914 01:07:59.691978   73455 logs.go:276] 0 containers: []
	W0914 01:07:59.691989   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:07:59.691998   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:07:59.692068   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:07:59.726855   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:07:59.726892   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:07:59.726900   73455 cri.go:89] found id: ""
	I0914 01:07:59.726913   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:07:59.726984   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.731160   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:07:59.735516   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:07:59.735539   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:07:59.749083   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:07:59.749116   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:07:59.788892   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:07:59.788925   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:07:59.824600   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:07:59.824628   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:00.324320   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:00.324359   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:00.391471   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:00.391507   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:00.525571   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:00.525604   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:00.567913   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:00.567946   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:00.601931   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:00.601956   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:00.641125   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:00.641155   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:00.703182   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:00.703214   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:00.743740   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:00.743770   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:00.785452   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:00.785486   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:01.214860   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:01.215083   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:08:03.334782   73455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:03.350570   73455 api_server.go:72] duration metric: took 4m17.725003464s to wait for apiserver process to appear ...
	I0914 01:08:03.350600   73455 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:03.350644   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:03.350703   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:03.391604   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.391633   73455 cri.go:89] found id: ""
	I0914 01:08:03.391641   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:03.391699   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.395820   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:03.395914   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:03.434411   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.434437   73455 cri.go:89] found id: ""
	I0914 01:08:03.434447   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:03.434502   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.439428   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:03.439497   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:03.475109   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.475131   73455 cri.go:89] found id: ""
	I0914 01:08:03.475138   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:03.475181   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.479024   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:03.479095   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:03.512771   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:03.512797   73455 cri.go:89] found id: ""
	I0914 01:08:03.512806   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:03.512865   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.522361   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:03.522431   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:03.557562   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:03.557581   73455 cri.go:89] found id: ""
	I0914 01:08:03.557588   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:03.557634   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.561948   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:03.562015   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:03.595010   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.595036   73455 cri.go:89] found id: ""
	I0914 01:08:03.595044   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:03.595089   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.598954   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:03.599013   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:03.632860   73455 cri.go:89] found id: ""
	I0914 01:08:03.632894   73455 logs.go:276] 0 containers: []
	W0914 01:08:03.632906   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:03.632914   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:03.633000   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:03.671289   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.671312   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:03.671318   73455 cri.go:89] found id: ""
	I0914 01:08:03.671332   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:03.671394   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.675406   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:03.678954   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:03.678981   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:03.721959   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:03.721991   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:03.760813   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:03.760852   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:03.804936   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:03.804965   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:03.856709   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:03.856746   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:03.898190   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:03.898218   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:04.344784   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:04.344824   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:04.418307   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:04.418347   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:04.432514   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:04.432547   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:04.536823   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:04.536858   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:04.571623   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:04.571656   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:04.606853   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:04.606887   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:04.642144   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:04.642177   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:07.191257   73455 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8444/healthz ...
	I0914 01:08:07.195722   73455 api_server.go:279] https://192.168.72.203:8444/healthz returned 200:
	ok
	I0914 01:08:07.196766   73455 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:07.196786   73455 api_server.go:131] duration metric: took 3.846179621s to wait for apiserver health ...
	I0914 01:08:07.196794   73455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:07.196815   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:07.196859   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:07.233355   73455 cri.go:89] found id: "38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.233382   73455 cri.go:89] found id: ""
	I0914 01:08:07.233392   73455 logs.go:276] 1 containers: [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295]
	I0914 01:08:07.233452   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.237370   73455 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:07.237430   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:07.270762   73455 cri.go:89] found id: "6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.270788   73455 cri.go:89] found id: ""
	I0914 01:08:07.270798   73455 logs.go:276] 1 containers: [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a]
	I0914 01:08:07.270846   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.274717   73455 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:07.274780   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:07.309969   73455 cri.go:89] found id: "eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.309995   73455 cri.go:89] found id: ""
	I0914 01:08:07.310005   73455 logs.go:276] 1 containers: [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84]
	I0914 01:08:07.310061   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.314971   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:07.315046   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:07.349147   73455 cri.go:89] found id: "e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:07.349172   73455 cri.go:89] found id: ""
	I0914 01:08:07.349180   73455 logs.go:276] 1 containers: [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d]
	I0914 01:08:07.349242   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.353100   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:07.353167   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:07.389514   73455 cri.go:89] found id: "a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.389547   73455 cri.go:89] found id: ""
	I0914 01:08:07.389557   73455 logs.go:276] 1 containers: [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1]
	I0914 01:08:07.389612   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.393717   73455 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:07.393775   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:07.433310   73455 cri.go:89] found id: "b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:07.433335   73455 cri.go:89] found id: ""
	I0914 01:08:07.433342   73455 logs.go:276] 1 containers: [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06]
	I0914 01:08:07.433401   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.437067   73455 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:07.437126   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:07.472691   73455 cri.go:89] found id: ""
	I0914 01:08:07.472725   73455 logs.go:276] 0 containers: []
	W0914 01:08:07.472736   73455 logs.go:278] No container was found matching "kindnet"
	I0914 01:08:07.472744   73455 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:08:07.472792   73455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:08:07.515902   73455 cri.go:89] found id: "bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.515924   73455 cri.go:89] found id: "6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:07.515927   73455 cri.go:89] found id: ""
	I0914 01:08:07.515934   73455 logs.go:276] 2 containers: [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c]
	I0914 01:08:07.515978   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.522996   73455 ssh_runner.go:195] Run: which crictl
	I0914 01:08:07.526453   73455 logs.go:123] Gathering logs for kube-apiserver [38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295] ...
	I0914 01:08:07.526477   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c2a1c006d774a4ce71b2f075328d371938d4e29974e3465b53776371ae4295"
	I0914 01:08:07.575354   73455 logs.go:123] Gathering logs for etcd [6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a] ...
	I0914 01:08:07.575386   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6234a7bcd6d951babe97b5624b5b5eca1f4b743e450f1ca11be8f9ab8cae6e4a"
	I0914 01:08:07.623091   73455 logs.go:123] Gathering logs for coredns [eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84] ...
	I0914 01:08:07.623125   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed5d3016c5146d8a65416ba8ef64478222b78797aff3f372ff4d0700fe59f84"
	I0914 01:08:07.662085   73455 logs.go:123] Gathering logs for kube-proxy [a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1] ...
	I0914 01:08:07.662114   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a208a2f3609d0dca841cd9f2acce8248a91d3e84c4419253c0e30c3dec6cb7f1"
	I0914 01:08:07.702785   73455 logs.go:123] Gathering logs for storage-provisioner [bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33] ...
	I0914 01:08:07.702809   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd70c0b225453f21b69847021df456c0c4adc4e645e43998d389ae20d3a2fa33"
	I0914 01:08:07.736311   73455 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:07.736337   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:07.808142   73455 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:07.808181   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:07.823769   73455 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:07.823816   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:07.927633   73455 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:07.927664   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:08.335284   73455 logs.go:123] Gathering logs for container status ...
	I0914 01:08:08.335334   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:08.382511   73455 logs.go:123] Gathering logs for kube-scheduler [e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d] ...
	I0914 01:08:08.382536   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e409487833e23d4825561860ac5cfff6bea0ee2ee4dfcf140d1f3455cbc1ee1d"
	I0914 01:08:08.421343   73455 logs.go:123] Gathering logs for kube-controller-manager [b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06] ...
	I0914 01:08:08.421376   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b88f0f70ed0bda5520ed6af44edd0b098d34d4ca8563d2b2357be8c95a05ae06"
	I0914 01:08:08.471561   73455 logs.go:123] Gathering logs for storage-provisioner [6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c] ...
	I0914 01:08:08.471594   73455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6342974eea142e1ac2ad77fd48db133be8e4d1d68745907cefd9c2afe5b8712c"
	I0914 01:08:11.014330   73455 system_pods.go:59] 8 kube-system pods found
	I0914 01:08:11.014360   73455 system_pods.go:61] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.014365   73455 system_pods.go:61] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.014370   73455 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.014377   73455 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.014380   73455 system_pods.go:61] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.014383   73455 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.014390   73455 system_pods.go:61] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.014396   73455 system_pods.go:61] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.014406   73455 system_pods.go:74] duration metric: took 3.817605732s to wait for pod list to return data ...
	I0914 01:08:11.014414   73455 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:11.017191   73455 default_sa.go:45] found service account: "default"
	I0914 01:08:11.017215   73455 default_sa.go:55] duration metric: took 2.793895ms for default service account to be created ...
	I0914 01:08:11.017225   73455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:11.022717   73455 system_pods.go:86] 8 kube-system pods found
	I0914 01:08:11.022748   73455 system_pods.go:89] "coredns-7c65d6cfc9-5lgsh" [118f49f1-166e-49bf-9309-f74e9f0cf99a] Running
	I0914 01:08:11.022756   73455 system_pods.go:89] "etcd-default-k8s-diff-port-754332" [f66a55ea-35f4-405d-88b3-3848d36dd247] Running
	I0914 01:08:11.022762   73455 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-754332" [e56c3dcd-5669-4fb0-9d1d-b9607df6c1ef] Running
	I0914 01:08:11.022768   73455 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-754332" [9a6da055-7aea-4c48-b063-f8b91aa67338] Running
	I0914 01:08:11.022774   73455 system_pods.go:89] "kube-proxy-f9qhk" [9b57a730-41c0-448b-b566-16581db6996c] Running
	I0914 01:08:11.022779   73455 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-754332" [9dc7d538-9397-4059-972f-728673a27cf8] Running
	I0914 01:08:11.022787   73455 system_pods.go:89] "metrics-server-6867b74b74-lxzvw" [cc0df995-8084-4f3e-92b2-0268d571ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 01:08:11.022793   73455 system_pods.go:89] "storage-provisioner" [43e85d21-ed6c-4c14-9528-6f9986aa1d9b] Running
	I0914 01:08:11.022804   73455 system_pods.go:126] duration metric: took 5.572052ms to wait for k8s-apps to be running ...
	I0914 01:08:11.022817   73455 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:11.022869   73455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:11.040624   73455 system_svc.go:56] duration metric: took 17.799931ms WaitForService to wait for kubelet
	I0914 01:08:11.040651   73455 kubeadm.go:582] duration metric: took 4m25.415100688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:11.040670   73455 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:11.044627   73455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 01:08:11.044653   73455 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:11.044665   73455 node_conditions.go:105] duration metric: took 3.989436ms to run NodePressure ...
	I0914 01:08:11.044678   73455 start.go:241] waiting for startup goroutines ...
	I0914 01:08:11.044687   73455 start.go:246] waiting for cluster config update ...
	I0914 01:08:11.044699   73455 start.go:255] writing updated cluster config ...
	I0914 01:08:11.045009   73455 ssh_runner.go:195] Run: rm -f paused
	I0914 01:08:11.093504   73455 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:08:11.095375   73455 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-754332" cluster and "default" namespace by default
	I0914 01:08:21.215947   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:08:21.216216   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218158   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:01.218412   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:01.218434   74039 kubeadm.go:310] 
	I0914 01:09:01.218501   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:09:01.218568   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:09:01.218600   74039 kubeadm.go:310] 
	I0914 01:09:01.218643   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:09:01.218700   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:09:01.218842   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:09:01.218859   74039 kubeadm.go:310] 
	I0914 01:09:01.219003   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:09:01.219044   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:09:01.219077   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:09:01.219083   74039 kubeadm.go:310] 
	I0914 01:09:01.219174   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:09:01.219275   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:09:01.219290   74039 kubeadm.go:310] 
	I0914 01:09:01.219412   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:09:01.219489   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:09:01.219595   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:09:01.219665   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:09:01.219675   74039 kubeadm.go:310] 
	I0914 01:09:01.220563   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:09:01.220663   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:09:01.220761   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 01:09:01.220954   74039 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 01:09:01.221001   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 01:09:01.677817   74039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:01.693521   74039 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 01:09:01.703623   74039 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 01:09:01.703643   74039 kubeadm.go:157] found existing configuration files:
	
	I0914 01:09:01.703696   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 01:09:01.713008   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 01:09:01.713077   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 01:09:01.722763   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 01:09:01.731595   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 01:09:01.731647   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 01:09:01.740557   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.749178   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 01:09:01.749243   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 01:09:01.758047   74039 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 01:09:01.767352   74039 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 01:09:01.767409   74039 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 01:09:01.776920   74039 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 01:09:01.848518   74039 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 01:09:01.848586   74039 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 01:09:01.987490   74039 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 01:09:01.987647   74039 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 01:09:01.987768   74039 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 01:09:02.153976   74039 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 01:09:02.155834   74039 out.go:235]   - Generating certificates and keys ...
	I0914 01:09:02.155944   74039 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 01:09:02.156042   74039 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 01:09:02.156170   74039 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 01:09:02.156255   74039 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 01:09:02.156366   74039 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 01:09:02.156452   74039 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 01:09:02.156746   74039 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 01:09:02.157184   74039 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 01:09:02.157577   74039 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 01:09:02.158072   74039 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 01:09:02.158143   74039 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 01:09:02.158195   74039 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 01:09:02.350707   74039 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 01:09:02.805918   74039 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 01:09:02.978026   74039 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 01:09:03.139524   74039 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 01:09:03.165744   74039 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 01:09:03.165893   74039 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 01:09:03.165978   74039 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 01:09:03.315240   74039 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 01:09:03.317262   74039 out.go:235]   - Booting up control plane ...
	I0914 01:09:03.317417   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 01:09:03.323017   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 01:09:03.324004   74039 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 01:09:03.324732   74039 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 01:09:03.326770   74039 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 01:09:43.329004   74039 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 01:09:43.329346   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:43.329583   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:48.330117   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:48.330361   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:09:58.331133   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:09:58.331415   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:18.331949   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:18.332232   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331302   74039 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 01:10:58.331626   74039 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 01:10:58.331642   74039 kubeadm.go:310] 
	I0914 01:10:58.331698   74039 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 01:10:58.331755   74039 kubeadm.go:310] 		timed out waiting for the condition
	I0914 01:10:58.331770   74039 kubeadm.go:310] 
	I0914 01:10:58.331833   74039 kubeadm.go:310] 	This error is likely caused by:
	I0914 01:10:58.331883   74039 kubeadm.go:310] 		- The kubelet is not running
	I0914 01:10:58.332024   74039 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 01:10:58.332034   74039 kubeadm.go:310] 
	I0914 01:10:58.332175   74039 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 01:10:58.332250   74039 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 01:10:58.332315   74039 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 01:10:58.332327   74039 kubeadm.go:310] 
	I0914 01:10:58.332481   74039 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 01:10:58.332597   74039 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 01:10:58.332613   74039 kubeadm.go:310] 
	I0914 01:10:58.332774   74039 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 01:10:58.332891   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 01:10:58.332996   74039 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 01:10:58.333111   74039 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 01:10:58.333135   74039 kubeadm.go:310] 
	I0914 01:10:58.333381   74039 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 01:10:58.333513   74039 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 01:10:58.333604   74039 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 01:10:58.333685   74039 kubeadm.go:394] duration metric: took 7m57.320140359s to StartCluster
	I0914 01:10:58.333736   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:10:58.333800   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:10:58.381076   74039 cri.go:89] found id: ""
	I0914 01:10:58.381103   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.381111   74039 logs.go:278] No container was found matching "kube-apiserver"
	I0914 01:10:58.381121   74039 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:10:58.381183   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:10:58.418461   74039 cri.go:89] found id: ""
	I0914 01:10:58.418490   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.418502   74039 logs.go:278] No container was found matching "etcd"
	I0914 01:10:58.418511   74039 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:10:58.418617   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:10:58.451993   74039 cri.go:89] found id: ""
	I0914 01:10:58.452020   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.452032   74039 logs.go:278] No container was found matching "coredns"
	I0914 01:10:58.452048   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:10:58.452101   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:10:58.485163   74039 cri.go:89] found id: ""
	I0914 01:10:58.485191   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.485199   74039 logs.go:278] No container was found matching "kube-scheduler"
	I0914 01:10:58.485205   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:10:58.485254   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:10:58.517197   74039 cri.go:89] found id: ""
	I0914 01:10:58.517222   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.517229   74039 logs.go:278] No container was found matching "kube-proxy"
	I0914 01:10:58.517234   74039 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:10:58.517282   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:10:58.554922   74039 cri.go:89] found id: ""
	I0914 01:10:58.554944   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.554952   74039 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 01:10:58.554957   74039 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:10:58.555003   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:10:58.588485   74039 cri.go:89] found id: ""
	I0914 01:10:58.588509   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.588517   74039 logs.go:278] No container was found matching "kindnet"
	I0914 01:10:58.588522   74039 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:10:58.588588   74039 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:10:58.624944   74039 cri.go:89] found id: ""
	I0914 01:10:58.624978   74039 logs.go:276] 0 containers: []
	W0914 01:10:58.624989   74039 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 01:10:58.625001   74039 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:10:58.625017   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 01:10:58.701892   74039 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 01:10:58.701919   74039 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:10:58.701931   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:10:58.803824   74039 logs.go:123] Gathering logs for container status ...
	I0914 01:10:58.803860   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:10:58.846151   74039 logs.go:123] Gathering logs for kubelet ...
	I0914 01:10:58.846179   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:10:58.897732   74039 logs.go:123] Gathering logs for dmesg ...
	I0914 01:10:58.897770   74039 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0914 01:10:58.914945   74039 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 01:10:58.914994   74039 out.go:270] * 
	W0914 01:10:58.915064   74039 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.915086   74039 out.go:270] * 
	W0914 01:10:58.915971   74039 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:10:58.919460   74039 out.go:201] 
	W0914 01:10:58.920476   74039 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 01:10:58.920529   74039 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 01:10:58.920551   74039 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 01:10:58.921837   74039 out.go:201] 
	
	
	==> CRI-O <==
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.425742374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276957425676872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f73aa888-ab04-4d84-9e5f-babca1ffc6b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.426404897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27245c09-4596-4a21-9d34-67741a9a8118 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.426468199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27245c09-4596-4a21-9d34-67741a9a8118 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.426502264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=27245c09-4596-4a21-9d34-67741a9a8118 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.458746882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12765edf-6145-4f44-8d6a-0b2d1371dfbf name=/runtime.v1.RuntimeService/Version
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.458850434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12765edf-6145-4f44-8d6a-0b2d1371dfbf name=/runtime.v1.RuntimeService/Version
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.459948725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d224f6c9-b413-48be-aefe-31855b3e9758 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.460325655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276957460306142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d224f6c9-b413-48be-aefe-31855b3e9758 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.460784722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a05acfe-ae30-4f33-b04e-a1527c0f431b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.460837217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a05acfe-ae30-4f33-b04e-a1527c0f431b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.460880061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a05acfe-ae30-4f33-b04e-a1527c0f431b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.492420659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=699b665d-ef68-40c0-8e7c-c39511c8493a name=/runtime.v1.RuntimeService/Version
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.492490272Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=699b665d-ef68-40c0-8e7c-c39511c8493a name=/runtime.v1.RuntimeService/Version
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.493473194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34ffed05-4924-4eca-8a93-1b43d37e4ebf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.493942805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276957493916860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34ffed05-4924-4eca-8a93-1b43d37e4ebf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.494536785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b542227c-4677-44b0-a56c-80b9ca169ce5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.494587629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b542227c-4677-44b0-a56c-80b9ca169ce5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.494626790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b542227c-4677-44b0-a56c-80b9ca169ce5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.527056636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=088d0cf2-7a00-4c29-806b-e7c426e13770 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.527150823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=088d0cf2-7a00-4c29-806b-e7c426e13770 name=/runtime.v1.RuntimeService/Version
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.528615945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cca130e-5cf2-418e-ab71-aef53d55c154 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.529110315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276957529087412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cca130e-5cf2-418e-ab71-aef53d55c154 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.529614821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c38764e-d8e2-4071-9166-1df27bfe2c0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.529686560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c38764e-d8e2-4071-9166-1df27bfe2c0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 01:22:37 old-k8s-version-431084 crio[634]: time="2024-09-14 01:22:37.529802000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c38764e-d8e2-4071-9166-1df27bfe2c0c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep14 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037690] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.969079] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.928925] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.082346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068199] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.169952] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.159964] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.281242] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Sep14 01:03] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.061152] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309557] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +10.314900] kauditd_printk_skb: 46 callbacks suppressed
	[Sep14 01:07] systemd-fstab-generator[5021]: Ignoring "noauto" option for root device
	[Sep14 01:09] systemd-fstab-generator[5305]: Ignoring "noauto" option for root device
	[  +0.068389] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:22:37 up 20 min,  0 users,  load average: 0.04, 0.01, 0.01
	Linux old-k8s-version-431084 5.10.207 #1 SMP Fri Sep 13 21:09:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: goroutine 148 [chan receive]:
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0009e47e0)
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: goroutine 149 [select]:
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a6def0, 0x4f0ac20, 0xc000a0a640, 0x1, 0xc00009e0c0)
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00014f180, 0xc00009e0c0)
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00093f060, 0xc00094d5c0)
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 14 01:22:34 old-k8s-version-431084 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 14 01:22:35 old-k8s-version-431084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 141.
	Sep 14 01:22:35 old-k8s-version-431084 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 14 01:22:35 old-k8s-version-431084 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 14 01:22:35 old-k8s-version-431084 kubelet[6824]: I0914 01:22:35.165071    6824 server.go:416] Version: v1.20.0
	Sep 14 01:22:35 old-k8s-version-431084 kubelet[6824]: I0914 01:22:35.165436    6824 server.go:837] Client rotation is on, will bootstrap in background
	Sep 14 01:22:35 old-k8s-version-431084 kubelet[6824]: I0914 01:22:35.167612    6824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 14 01:22:35 old-k8s-version-431084 kubelet[6824]: I0914 01:22:35.168939    6824 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 14 01:22:35 old-k8s-version-431084 kubelet[6824]: W0914 01:22:35.169073    6824 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 2 (226.963668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-431084" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (153.01s)
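The failure above matches the pattern seen throughout the old-k8s-version (v1.20.0) runs in this report: kubeadm's wait-control-plane phase times out because the kubelet never answers on localhost:10248, CRI-O reports no control-plane containers at all, and the kubelet journal shows the service restarting repeatedly ("restart counter is at 141") after logging "Cannot detect current cgroup on cgroup v2". A minimal follow-up sketch, using only the commands the captured output itself recommends; the profile name old-k8s-version-431084 is specific to this run and is assumed to still exist:

	# On the minikube node (reachable via 'minikube ssh -p old-k8s-version-431084'):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List any Kubernetes containers CRI-O managed to start (none were found above):
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Suggestion printed by minikube for this exit (K8S_KUBELET_NOT_RUNNING):
	minikube start -p old-k8s-version-431084 --extra-config=kubelet.cgroup-driver=systemd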

                                                
                                    

Test pass (245/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 36.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 16.15
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 80.76
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 131.51
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 11.84
37 TestAddons/parallel/HelmTiller 11.75
39 TestAddons/parallel/CSI 54.46
40 TestAddons/parallel/Headlamp 16.27
41 TestAddons/parallel/CloudSpanner 6.55
42 TestAddons/parallel/LocalPath 15.19
43 TestAddons/parallel/NvidiaDevicePlugin 5.52
44 TestAddons/parallel/Yakd 10.9
45 TestAddons/StoppedEnableDisable 92.69
46 TestCertOptions 93.86
47 TestCertExpiration 330.79
49 TestForceSystemdFlag 106.58
50 TestForceSystemdEnv 43.8
52 TestKVMDriverInstallOrUpdate 3.87
56 TestErrorSpam/setup 42.18
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.5
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 5.78
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 80.82
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.01
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
73 TestFunctional/serial/CacheCmd/cache/add_local 2.14
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 274.08
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.08
84 TestFunctional/serial/LogsFileCmd 1.1
85 TestFunctional/serial/InvalidService 4.64
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 34.2
89 TestFunctional/parallel/DryRun 0.31
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 0.83
95 TestFunctional/parallel/ServiceCmdConnect 8.49
96 TestFunctional/parallel/AddonsCmd 0.24
97 TestFunctional/parallel/PersistentVolumeClaim 32.59
99 TestFunctional/parallel/SSHCmd 0.4
100 TestFunctional/parallel/CpCmd 1.24
101 TestFunctional/parallel/MySQL 26.64
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.39
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
111 TestFunctional/parallel/License 0.58
112 TestFunctional/parallel/ServiceCmd/DeployApp 12.34
113 TestFunctional/parallel/Version/short 0.05
114 TestFunctional/parallel/Version/components 0.43
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.07
120 TestFunctional/parallel/ImageCommands/Setup 1.77
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.38
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.47
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
128 TestFunctional/parallel/ImageCommands/ImageRemove 3.12
129 TestFunctional/parallel/ServiceCmd/List 0.92
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.91
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
136 TestFunctional/parallel/ProfileCmd/profile_list 0.43
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
148 TestFunctional/parallel/MountCmd/any-port 11.48
149 TestFunctional/parallel/MountCmd/specific-port 1.99
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 199.6
158 TestMultiControlPlane/serial/DeployApp 6.77
159 TestMultiControlPlane/serial/PingHostFromPods 1.21
160 TestMultiControlPlane/serial/AddWorkerNode 58.11
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.81
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.8
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/RestartCluster 352.75
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 76.91
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 78.4
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.67
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.6
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.35
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 91.96
211 TestMountStart/serial/StartWithMountFirst 26.3
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 24.44
214 TestMountStart/serial/VerifyMountSecond 0.38
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.38
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 24.01
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 114.14
223 TestMultiNode/serial/DeployApp2Nodes 5.48
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 47.51
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.88
229 TestMultiNode/serial/StopNode 2.15
230 TestMultiNode/serial/StartAfterStop 40.03
232 TestMultiNode/serial/DeleteNode 2.31
234 TestMultiNode/serial/RestartMultiNode 185.54
235 TestMultiNode/serial/ValidateNameConflict 41.17
242 TestScheduledStopUnix 113.29
246 TestRunningBinaryUpgrade 196.94
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 96.85
260 TestNetworkPlugins/group/false 2.91
264 TestNoKubernetes/serial/StartWithStopK8s 68.64
265 TestNoKubernetes/serial/Start 27.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
267 TestNoKubernetes/serial/ProfileList 0.78
268 TestNoKubernetes/serial/Stop 1.28
269 TestNoKubernetes/serial/StartNoArgs 62.71
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
271 TestStoppedBinaryUpgrade/Setup 2.34
272 TestStoppedBinaryUpgrade/Upgrade 109.18
281 TestPause/serial/Start 94.86
282 TestNetworkPlugins/group/auto/Start 101.31
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
284 TestNetworkPlugins/group/kindnet/Start 114.83
286 TestNetworkPlugins/group/auto/KubeletFlags 0.2
287 TestNetworkPlugins/group/auto/NetCatPod 10.21
288 TestNetworkPlugins/group/auto/DNS 0.16
289 TestNetworkPlugins/group/auto/Localhost 0.13
290 TestNetworkPlugins/group/auto/HairPin 0.14
291 TestNetworkPlugins/group/calico/Start 82.87
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
294 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
295 TestNetworkPlugins/group/kindnet/DNS 0.17
296 TestNetworkPlugins/group/kindnet/Localhost 0.13
297 TestNetworkPlugins/group/kindnet/HairPin 0.11
298 TestNetworkPlugins/group/custom-flannel/Start 71.95
299 TestNetworkPlugins/group/enable-default-cni/Start 61.51
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.22
302 TestNetworkPlugins/group/calico/NetCatPod 11.31
303 TestNetworkPlugins/group/calico/DNS 0.6
304 TestNetworkPlugins/group/calico/Localhost 0.18
305 TestNetworkPlugins/group/calico/HairPin 0.17
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
308 TestNetworkPlugins/group/flannel/Start 68.83
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.25
311 TestNetworkPlugins/group/custom-flannel/DNS 0.19
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
313 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
314 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
315 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
316 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
317 TestNetworkPlugins/group/bridge/Start 61.87
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
322 TestNetworkPlugins/group/flannel/NetCatPod 11.24
324 TestStartStop/group/no-preload/serial/FirstStart 108.62
325 TestNetworkPlugins/group/flannel/DNS 0.19
326 TestNetworkPlugins/group/flannel/Localhost 0.16
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
328 TestNetworkPlugins/group/flannel/HairPin 0.16
329 TestNetworkPlugins/group/bridge/NetCatPod 11.4
330 TestNetworkPlugins/group/bridge/DNS 16.79
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.72
333 TestNetworkPlugins/group/bridge/Localhost 0.15
334 TestNetworkPlugins/group/bridge/HairPin 0.13
336 TestStartStop/group/newest-cni/serial/FirstStart 56.54
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.31
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
342 TestStartStop/group/newest-cni/serial/Stop 10.59
343 TestStartStop/group/no-preload/serial/DeployApp 9.28
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
346 TestStartStop/group/newest-cni/serial/SecondStart 37.02
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
351 TestStartStop/group/newest-cni/serial/Pause 2.22
353 TestStartStop/group/embed-certs/serial/FirstStart 52.42
354 TestStartStop/group/embed-certs/serial/DeployApp 11.25
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 634.03
362 TestStartStop/group/no-preload/serial/SecondStart 604.96
363 TestStartStop/group/old-k8s-version/serial/Stop 2.37
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
367 TestStartStop/group/embed-certs/serial/SecondStart 495.97
TestDownloadOnly/v1.20.0/json-events (36.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-551384 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-551384 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (36.361509942s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (36.36s)
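With -o=json, minikube reports progress as a stream of JSON events on stdout, which is what this test consumes. A minimal sketch of inspecting that stream interactively; jq is an assumed local tool and is only used to pretty-print each event, no particular field names are relied on:

    # Re-run the download-only start and pretty-print the JSON event stream.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-551384 --force \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq .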

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-551384
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-551384: exit status 85 (60.530231ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |          |
	|         | -p download-only-551384        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:25.646321   12614 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:25.646572   12614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:25.646582   12614 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:25.646586   12614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:25.646771   12614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	W0913 23:26:25.646892   12614 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19640-5422/.minikube/config/config.json: open /home/jenkins/minikube-integration/19640-5422/.minikube/config/config.json: no such file or directory
	I0913 23:26:25.647482   12614 out.go:352] Setting JSON to true
	I0913 23:26:25.648379   12614 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":532,"bootTime":1726269454,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:26:25.648470   12614 start.go:139] virtualization: kvm guest
	I0913 23:26:25.650853   12614 out.go:97] [download-only-551384] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 23:26:25.650981   12614 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:26:25.651014   12614 notify.go:220] Checking for updates...
	I0913 23:26:25.652334   12614 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:25.653595   12614 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:25.654845   12614 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:26:25.655968   12614 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:26:25.657167   12614 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 23:26:25.659008   12614 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 23:26:25.659323   12614 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:25.763512   12614 out.go:97] Using the kvm2 driver based on user configuration
	I0913 23:26:25.763547   12614 start.go:297] selected driver: kvm2
	I0913 23:26:25.763554   12614 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:26:25.763925   12614 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:26:25.764058   12614 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:26:25.780021   12614 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:26:25.780090   12614 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:25.780650   12614 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0913 23:26:25.780815   12614 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 23:26:25.780844   12614 cni.go:84] Creating CNI manager for ""
	I0913 23:26:25.780886   12614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:26:25.780894   12614 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:26:25.780952   12614 start.go:340] cluster config:
	{Name:download-only-551384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-551384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:25.781149   12614 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:26:25.783515   12614 out.go:97] Downloading VM boot image ...
	I0913 23:26:25.783566   12614 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/iso/amd64/minikube-v1.34.0-1726243933-19640-amd64.iso
	I0913 23:26:47.057405   12614 out.go:97] Starting "download-only-551384" primary control-plane node in "download-only-551384" cluster
	I0913 23:26:47.057432   12614 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 23:26:47.153407   12614 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 23:26:47.153439   12614 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:47.153605   12614 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0913 23:26:47.155823   12614 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 23:26:47.155847   12614 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0913 23:26:47.256891   12614 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0913 23:27:00.289317   12614 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0913 23:27:00.289415   12614 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-551384 host does not exist
	  To start a cluster, run: "minikube start -p download-only-551384"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
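The Last Start log above also records exactly where the boot ISO and the v1.20.0 preload tarball come from, and which md5 the preload is verified against. A minimal sketch of fetching and checking the same preload outside of minikube, using the URL and checksum from the log (curl and md5sum assumed available):

    # Download the v1.20.0 cri-o preload and verify it against the md5 printed in the log.
    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -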

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-551384
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (16.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-763760 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-763760 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.1503586s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (16.15s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-763760
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-763760: exit status 85 (56.150095ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-551384        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| delete  | -p download-only-551384        | download-only-551384 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| start   | -o=json --download-only        | download-only-763760 | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	|         | -p download-only-763760        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:27:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:27:02.334772   12903 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:27:02.334904   12903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:02.334913   12903 out.go:358] Setting ErrFile to fd 2...
	I0913 23:27:02.334922   12903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:02.335093   12903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:27:02.335652   12903 out.go:352] Setting JSON to true
	I0913 23:27:02.336485   12903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":568,"bootTime":1726269454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:27:02.336583   12903 start.go:139] virtualization: kvm guest
	I0913 23:27:02.338862   12903 out.go:97] [download-only-763760] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:27:02.339000   12903 notify.go:220] Checking for updates...
	I0913 23:27:02.340421   12903 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:27:02.341747   12903 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:27:02.343015   12903 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:27:02.344303   12903 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:27:02.345507   12903 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 23:27:02.347741   12903 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 23:27:02.348027   12903 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:27:02.381826   12903 out.go:97] Using the kvm2 driver based on user configuration
	I0913 23:27:02.381856   12903 start.go:297] selected driver: kvm2
	I0913 23:27:02.381862   12903 start.go:901] validating driver "kvm2" against <nil>
	I0913 23:27:02.382185   12903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:02.382305   12903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19640-5422/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 23:27:02.398213   12903 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 23:27:02.398266   12903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:27:02.398750   12903 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0913 23:27:02.398901   12903 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 23:27:02.398929   12903 cni.go:84] Creating CNI manager for ""
	I0913 23:27:02.398975   12903 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0913 23:27:02.398985   12903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:27:02.399058   12903 start.go:340] cluster config:
	{Name:download-only-763760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-763760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:02.399139   12903 iso.go:125] acquiring lock: {Name:mk847265edcfde6c31d5f8c0ad489f5e3ef3fe23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:27:02.401002   12903 out.go:97] Starting "download-only-763760" primary control-plane node in "download-only-763760" cluster
	I0913 23:27:02.401029   12903 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:02.504840   12903 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:27:02.504898   12903 cache.go:56] Caching tarball of preloaded images
	I0913 23:27:02.505056   12903 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0913 23:27:02.506924   12903 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 23:27:02.506953   12903 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0913 23:27:02.608099   12903 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0913 23:27:16.867868   12903 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0913 23:27:16.867979   12903 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-5422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-763760 host does not exist
	  To start a cluster, run: "minikube start -p download-only-763760"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-763760
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-510431 --alsologtostderr --binary-mirror http://127.0.0.1:40845 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-510431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-510431
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (80.76s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-363063 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-363063 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.752940728s)
helpers_test.go:175: Cleaning up "offline-crio-363063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-363063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-363063: (1.007826105s)
--- PASS: TestOffline (80.76s)
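One way to approximate this offline start locally is to warm the cache first and then bring the profile up with the same flags; a rough sketch only, and the download-only pass beforehand is an assumption about reproduction rather than something the harness itself does here:

    # 1. Warm the ISO/preload cache.
    out/minikube-linux-amd64 start --download-only -p offline-crio-363063 --driver=kvm2 --container-runtime=crio
    # 2. Start the cluster with the flags used by the test.
    out/minikube-linux-amd64 start -p offline-crio-363063 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 --container-runtime=crio
    # Clean up afterwards, as the test does.
    out/minikube-linux-amd64 delete -p offline-crio-363063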

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-473197
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-473197: exit status 85 (51.812849ms)

                                                
                                                
-- stdout --
	* Profile "addons-473197" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-473197"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
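Exit status 85 is what the test expects when the addon command targets a profile that does not exist yet. A minimal sketch of checking that behaviour by hand, assuming no addons-473197 profile has been created:

    # Should print the "Profile ... not found" hint and exit with status 85.
    out/minikube-linux-amd64 addons enable dashboard -p addons-473197
    echo "exit status: $?"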

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-473197
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-473197: exit status 85 (48.441445ms)

                                                
                                                
-- stdout --
	* Profile "addons-473197" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-473197"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (131.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-473197 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-473197 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m11.506222364s)
--- PASS: TestAddons/Setup (131.51s)
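Here every addon is enabled up front via repeated --addons flags on the initial start. The same addons can also be toggled individually on the running profile; a minimal sketch, assuming the names accepted by `addons enable` match the ones passed above:

    # Enable a couple of the same addons one at a time and list their state.
    out/minikube-linux-amd64 -p addons-473197 addons enable registry
    out/minikube-linux-amd64 -p addons-473197 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-473197 addons list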

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-473197 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-473197 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)
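What this verifies is that the gcp-auth secret shows up in a namespace created after the addon is enabled. A minimal sketch of the same check, reusing the kubectl commands from the log:

    # Create a fresh namespace and confirm the gcp-auth secret was replicated into it.
    kubectl --context addons-473197 create ns new-namespace
    kubectl --context addons-473197 get secret gcp-auth -n new-namespace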

                                                
                                    
TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j8mlw" [b7572606-3cad-409a-972f-b2b55316514e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00455804s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-473197
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-473197: (5.833302444s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)
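The helper polls pods labelled k8s-app=gadget in the gadget namespace until they are healthy, then disables the addon. A roughly equivalent manual check with kubectl's built-in wait, assuming the same label, namespace and timeout as the test:

    # Wait up to 8 minutes for the gadget pod(s) to become Ready, then disable the addon.
    kubectl --context addons-473197 -n gadget wait --for=condition=Ready pod -l k8s-app=gadget --timeout=8m
    out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-473197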

                                                
                                    
TestAddons/parallel/HelmTiller (11.75s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.472141ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-nnd7j" [f2148f01-98eb-4544-82d0-4569d22426e2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005500266s
addons_test.go:475: (dbg) Run:  kubectl --context addons-473197 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-473197 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.917607737s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.75s)

                                                
                                    
TestAddons/parallel/CSI (54.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.214432ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e6b7c0a9-2479-447b-ba75-d817cfe759fc] Pending
helpers_test.go:344: "task-pv-pod" [e6b7c0a9-2479-447b-ba75-d817cfe759fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e6b7c0a9-2479-447b-ba75-d817cfe759fc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003649796s
addons_test.go:590: (dbg) Run:  kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-473197 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-473197 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-473197 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-473197 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7f1ceabf-f8ab-4039-9254-0ec07ce53794] Pending
helpers_test.go:344: "task-pv-pod-restore" [7f1ceabf-f8ab-4039-9254-0ec07ce53794] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7f1ceabf-f8ab-4039-9254-0ec07ce53794] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004560915s
addons_test.go:632: (dbg) Run:  kubectl --context addons-473197 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-473197 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-473197 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.890182532s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 addons disable volumesnapshots --alsologtostderr -v=1: (1.223788651s)
--- PASS: TestAddons/parallel/CSI (54.46s)
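Taken together, the steps above are a snapshot-and-restore walkthrough for the csi-hostpath driver. A condensed sketch of the same flow using the manifests the test references (the testdata/ paths are relative to the minikube test tree, so treat them as placeholders elsewhere); the waits between steps are omitted for brevity:

    # Provision a PVC and pod, snapshot the volume, then restore it into a new PVC/pod.
    kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-473197 delete pod task-pv-pod
    kubectl --context addons-473197 delete pvc hpvc
    kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-473197 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    # Clean up the restored resources and the snapshot.
    kubectl --context addons-473197 delete pod task-pv-pod-restore
    kubectl --context addons-473197 delete pvc hpvc-restore
    kubectl --context addons-473197 delete volumesnapshot new-snapshot-demo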

                                                
                                    
TestAddons/parallel/Headlamp (16.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-473197 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-z5dzh" [8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-z5dzh" [8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-z5dzh" [8ccd5ac4-ddd3-46db-82c7-d1bc06a86aeb] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.005419694s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (16.27s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-c2qmr" [49fb792b-79f9-425a-8db5-490e4636fbbc] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003825214s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-473197
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (15.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-473197 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-473197 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bfb3767d-477f-4e5a-a747-acb9162d74fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bfb3767d-477f-4e5a-a747-acb9162d74fc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bfb3767d-477f-4e5a-a747-acb9162d74fc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.005213338s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-473197 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 ssh "cat /opt/local-path-provisioner/pvc-ae2e21c9-b520-422d-b18a-7f6a58ec0099_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-473197 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-473197 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.19s)

TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vfb4s" [60b55c3e-69a3-4722-8cb3-0e216d168ee8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.018284585s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-473197
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

TestAddons/parallel/Yakd (10.9s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pd4zj" [c25c6c88-ca33-4a8d-a3fb-09dd1269e2c6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003870792s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-473197 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-473197 addons disable yakd --alsologtostderr -v=1: (5.899490049s)
--- PASS: TestAddons/parallel/Yakd (10.90s)

TestAddons/StoppedEnableDisable (92.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-473197
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-473197: (1m32.420617338s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-473197
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-473197
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-473197
--- PASS: TestAddons/StoppedEnableDisable (92.69s)

TestCertOptions (93.86s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-408937 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-408937 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m32.374870752s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-408937 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-408937 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-408937 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-408937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-408937
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-408937: (1.012236274s)
--- PASS: TestCertOptions (93.86s)

TestCertExpiration (330.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-554954 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0914 00:42:20.623700   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-554954 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m50.750932333s)
E0914 00:44:31.535186   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-554954 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0914 00:47:20.624363   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-554954 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.961450257s)
helpers_test.go:175: Cleaning up "cert-expiration-554954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-554954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-554954: (1.079545433s)
--- PASS: TestCertExpiration (330.79s)

TestForceSystemdFlag (106.58s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-070238 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-070238 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m45.569474971s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-070238 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-070238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-070238
--- PASS: TestForceSystemdFlag (106.58s)

TestForceSystemdEnv (43.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-451535 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-451535 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.809493815s)
helpers_test.go:175: Cleaning up "force-systemd-env-451535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-451535
--- PASS: TestForceSystemdEnv (43.80s)

TestKVMDriverInstallOrUpdate (3.87s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.87s)

TestErrorSpam/setup (42.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-916082 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-916082 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-916082 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-916082 --driver=kvm2  --container-runtime=crio: (42.175229073s)
--- PASS: TestErrorSpam/setup (42.18s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (5.78s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 stop: (2.340886551s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 stop: (1.98009495s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-916082 --log_dir /tmp/nospam-916082 stop: (1.462974095s)
--- PASS: TestErrorSpam/stop (5.78s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19640-5422/.minikube/files/etc/test/nested/copy/12602/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383860 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-383860 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.817343213s)
--- PASS: TestFunctional/serial/StartWithProxy (80.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383860 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-383860 --alsologtostderr -v=8: (41.00580355s)
functional_test.go:663: soft start took 41.006528141s for "functional-383860" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.01s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-383860 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 cache add registry.k8s.io/pause:3.1: (1.24927098s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 cache add registry.k8s.io/pause:3.3: (1.305055542s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 cache add registry.k8s.io/pause:latest: (1.228785491s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-383860 /tmp/TestFunctionalserialCacheCmdcacheadd_local1977529091/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cache add minikube-local-cache-test:functional-383860
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 cache add minikube-local-cache-test:functional-383860: (1.783280088s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cache delete minikube-local-cache-test:functional-383860
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-383860
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.43893ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 cache reload: (1.030562671s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 kubectl -- --context functional-383860 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-383860 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (274.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383860 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0913 23:49:31.538894   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:31.545725   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:31.557071   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:31.578525   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:31.619991   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:31.701449   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:31.862994   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:32.184910   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:32.826961   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:34.108991   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:36.671009   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:41.792383   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:52.033806   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:50:12.516076   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:50:53.478526   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-383860 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m34.078105584s)
functional_test.go:761: restart took 4m34.078244025s for "functional-383860" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (274.08s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-383860 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 logs: (1.079595036s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

TestFunctional/serial/LogsFileCmd (1.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 logs --file /tmp/TestFunctionalserialLogsFileCmd2202931615/001/logs.txt
E0913 23:52:15.400422   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 logs --file /tmp/TestFunctionalserialLogsFileCmd2202931615/001/logs.txt: (1.095894323s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

TestFunctional/serial/InvalidService (4.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-383860 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-383860
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-383860: exit status 115 (262.985082ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.156:32092 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-383860 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-383860 delete -f testdata/invalidsvc.yaml: (1.192265161s)
--- PASS: TestFunctional/serial/InvalidService (4.64s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 config get cpus: exit status 14 (59.031404ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 config get cpus: exit status 14 (53.533909ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (34.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383860 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383860 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23050: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (34.20s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383860 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-383860 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (161.764353ms)

-- stdout --
	* [functional-383860] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0913 23:52:35.924063   23550 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:52:35.924175   23550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:52:35.924183   23550 out.go:358] Setting ErrFile to fd 2...
	I0913 23:52:35.924188   23550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:52:35.924370   23550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:52:35.924875   23550 out.go:352] Setting JSON to false
	I0913 23:52:35.925724   23550 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2102,"bootTime":1726269454,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:52:35.925819   23550 start.go:139] virtualization: kvm guest
	I0913 23:52:35.928238   23550 out.go:177] * [functional-383860] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 23:52:35.930146   23550 notify.go:220] Checking for updates...
	I0913 23:52:35.930177   23550 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:52:35.932245   23550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:52:35.934088   23550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:52:35.935859   23550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:52:35.937668   23550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:52:35.939304   23550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:52:35.941300   23550 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:52:35.941970   23550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:52:35.942059   23550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:52:35.958373   23550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41235
	I0913 23:52:35.958879   23550 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:52:35.959597   23550 main.go:141] libmachine: Using API Version  1
	I0913 23:52:35.959635   23550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:52:35.960056   23550 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:52:35.960242   23550 main.go:141] libmachine: (functional-383860) Calling .DriverName
	I0913 23:52:35.960562   23550 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:52:35.960894   23550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:52:35.960935   23550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:52:35.977046   23550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0913 23:52:35.977657   23550 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:52:35.978316   23550 main.go:141] libmachine: Using API Version  1
	I0913 23:52:35.978347   23550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:52:35.978783   23550 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:52:35.979003   23550 main.go:141] libmachine: (functional-383860) Calling .DriverName
	I0913 23:52:36.017562   23550 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 23:52:36.019999   23550 start.go:297] selected driver: kvm2
	I0913 23:52:36.020033   23550 start.go:901] validating driver "kvm2" against &{Name:functional-383860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-383860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:52:36.020236   23550 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:52:36.023957   23550 out.go:201] 
	W0913 23:52:36.025745   23550 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 23:52:36.028271   23550 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383860 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383860 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-383860 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.163642ms)

-- stdout --
	* [functional-383860] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0913 23:52:35.758182   23523 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:52:35.758307   23523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:52:35.758313   23523 out.go:358] Setting ErrFile to fd 2...
	I0913 23:52:35.758318   23523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:52:35.758645   23523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0913 23:52:35.759217   23523 out.go:352] Setting JSON to false
	I0913 23:52:35.760235   23523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2102,"bootTime":1726269454,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:52:35.760331   23523 start.go:139] virtualization: kvm guest
	I0913 23:52:35.763131   23523 out.go:177] * [functional-383860] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0913 23:52:35.764757   23523 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:52:35.764807   23523 notify.go:220] Checking for updates...
	I0913 23:52:35.768384   23523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:52:35.769920   23523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0913 23:52:35.771618   23523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0913 23:52:35.773424   23523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:52:35.775328   23523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:52:35.777320   23523 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0913 23:52:35.777925   23523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:52:35.778012   23523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:52:35.794015   23523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0913 23:52:35.794410   23523 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:52:35.794966   23523 main.go:141] libmachine: Using API Version  1
	I0913 23:52:35.794987   23523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:52:35.795387   23523 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:52:35.795596   23523 main.go:141] libmachine: (functional-383860) Calling .DriverName
	I0913 23:52:35.795872   23523 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:52:35.796323   23523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0913 23:52:35.796369   23523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 23:52:35.813450   23523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0913 23:52:35.813897   23523 main.go:141] libmachine: () Calling .GetVersion
	I0913 23:52:35.814460   23523 main.go:141] libmachine: Using API Version  1
	I0913 23:52:35.814486   23523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 23:52:35.814887   23523 main.go:141] libmachine: () Calling .GetMachineName
	I0913 23:52:35.815142   23523 main.go:141] libmachine: (functional-383860) Calling .DriverName
	I0913 23:52:35.853851   23523 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0913 23:52:35.855760   23523 start.go:297] selected driver: kvm2
	I0913 23:52:35.855810   23523 start.go:901] validating driver "kvm2" against &{Name:functional-383860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-383860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:52:35.855980   23523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:52:35.859878   23523 out.go:201] 
	W0913 23:52:35.862302   23523 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 23:52:35.866031   23523 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (8.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-383860 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-383860 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mwcrb" [4de03683-3228-486d-ae7d-416565ef7d6a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mwcrb" [4de03683-3228-486d-ae7d-416565ef7d6a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006305306s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.156:32639
functional_test.go:1675: http://192.168.39.156:32639: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-mwcrb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.156:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.156:32639
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.49s)
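A hand-run sketch of the flow this test covers, using the commands from the log above; the wait and wget steps are assumptions (the test polls pod labels and parses the echoserver body instead):

	kubectl --context functional-383860 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-383860 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-383860 wait --for=condition=available deployment/hello-node-connect --timeout=120s
	URL=$(out/minikube-linux-amd64 -p functional-383860 service hello-node-connect --url)
	wget -qO- "$URL"   # should return the echoserver request summary shown above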

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (32.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2c161523-dd18-4dc5-827f-d15a9911617b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01692736s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-383860 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-383860 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-383860 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-383860 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-383860 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [07e32884-68b0-4ec6-aa7a-10ebb745614f] Pending
helpers_test.go:344: "sp-pod" [07e32884-68b0-4ec6-aa7a-10ebb745614f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [07e32884-68b0-4ec6-aa7a-10ebb745614f] Running
2024/09/13 23:52:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004872356s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-383860 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-383860 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-383860 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [86d67ba6-a177-4d25-812a-23e56ac77d2e] Pending
helpers_test.go:344: "sp-pod" [86d67ba6-a177-4d25-812a-23e56ac77d2e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [86d67ba6-a177-4d25-812a-23e56ac77d2e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004120177s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-383860 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.59s)
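The persistence check above amounts to writing through the claim, recreating the pod, and reading the file back. A sketch of doing the same by hand with the testdata manifests from the log (the wait steps are assumptions; the test polls the test=storage-provisioner label instead):

	kubectl --context functional-383860 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-383860 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-383860 wait --for=condition=Ready pod/sp-pod --timeout=180s
	kubectl --context functional-383860 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-383860 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-383860 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-383860 wait --for=condition=Ready pod/sp-pod --timeout=180s
	kubectl --context functional-383860 exec sp-pod -- ls /tmp/mount   # foo should survive the pod restart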

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh -n functional-383860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cp functional-383860:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4005520286/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh -n functional-383860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh -n functional-383860 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)
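The three runs above cover host-to-node copy, node-to-host copy, and copying into a directory that does not yet exist. A short sketch of the same round trip (the /tmp destination is a placeholder):

	# Host -> node
	out/minikube-linux-amd64 -p functional-383860 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# Node -> host
	out/minikube-linux-amd64 -p functional-383860 cp functional-383860:/home/docker/cp-test.txt /tmp/cp-test.txt
	# Verify inside the node
	out/minikube-linux-amd64 -p functional-383860 ssh -n functional-383860 "sudo cat /home/docker/cp-test.txt"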

                                                
                                    
TestFunctional/parallel/MySQL (26.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-383860 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gx55j" [d6c5a1fa-f399-4716-aa3b-8c9a5c9db758] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gx55j" [d6c5a1fa-f399-4716-aa3b-8c9a5c9db758] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003932967s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-383860 exec mysql-6cdb49bbb-gx55j -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-383860 exec mysql-6cdb49bbb-gx55j -- mysql -ppassword -e "show databases;": exit status 1 (154.283784ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-383860 exec mysql-6cdb49bbb-gx55j -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-383860 exec mysql-6cdb49bbb-gx55j -- mysql -ppassword -e "show databases;": exit status 1 (171.436506ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-383860 exec mysql-6cdb49bbb-gx55j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.64s)
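The two non-zero exits above are expected: the pod reports Running before mysqld has finished creating its socket, so the test retries until "show databases;" succeeds. A hand-run equivalent as a sketch (the retry loop itself is an assumption, not part of the test; the pod name is taken from the log):

	until kubectl --context functional-383860 exec mysql-6cdb49bbb-gx55j -- mysql -ppassword -e "show databases;"; do
	  sleep 2   # wait for mysqld to finish initializing
	done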

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12602/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/test/nested/copy/12602/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
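This check relies on minikube's file sync: files placed under the minikube home's files/ tree are copied into the node at the same path when the cluster starts, which is how /etc/test/nested/copy/12602/hosts got there. A sketch of verifying both sides by hand, assuming the default MINIKUBE_HOME layout:

	ls "${MINIKUBE_HOME:-$HOME/.minikube}/files/etc/test/nested/copy/12602/hosts"
	out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/test/nested/copy/12602/hosts"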

                                                
                                    
TestFunctional/parallel/CertSync (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12602.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/ssl/certs/12602.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12602.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /usr/share/ca-certificates/12602.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/126022.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/ssl/certs/126022.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/126022.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /usr/share/ca-certificates/126022.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.39s)
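The hashed filenames being checked (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming for /etc/ssl/certs, so the expected names can be derived from the PEM files themselves. A sketch, assuming copies of the synced certificates are available locally as 12602.pem and 126022.pem:

	# The subject hash plus ".0" is the expected name under /etc/ssl/certs.
	openssl x509 -noout -hash -in 12602.pem    # should print 51391683 for the cert synced here
	openssl x509 -noout -hash -in 126022.pem   # should print 3ec20f2e
	out/minikube-linux-amd64 -p functional-383860 ssh "sudo cat /etc/ssl/certs/51391683.0"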

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-383860 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh "sudo systemctl is-active docker": exit status 1 (210.133029ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh "sudo systemctl is-active containerd": exit status 1 (205.39428ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
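The "exit status 3" above is systemctl's is-active convention: it prints "inactive" and exits non-zero when a unit is not running, which is exactly what this test expects for docker and containerd on a crio node. A one-line sketch:

	out/minikube-linux-amd64 -p functional-383860 ssh "sudo systemctl is-active docker" || echo "docker is not the active runtime (expected with crio)"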

                                                
                                    
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-383860 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-383860 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jbjq2" [ef9330e5-c6f6-4153-83c4-9bfeecd2d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jbjq2" [ef9330e5-c6f6-4153-83c4-9bfeecd2d9a2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.139450062s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383860 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-383860
localhost/kicbase/echo-server:functional-383860
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383860 image ls --format short --alsologtostderr:
I0913 23:52:56.694350   24791 out.go:345] Setting OutFile to fd 1 ...
I0913 23:52:56.694450   24791 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:56.694458   24791 out.go:358] Setting ErrFile to fd 2...
I0913 23:52:56.694462   24791 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:56.694705   24791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
I0913 23:52:56.695302   24791 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:56.695402   24791 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:56.695768   24791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:56.695827   24791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:56.710630   24791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
I0913 23:52:56.711082   24791 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:56.711684   24791 main.go:141] libmachine: Using API Version  1
I0913 23:52:56.711708   24791 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:56.712122   24791 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:56.712323   24791 main.go:141] libmachine: (functional-383860) Calling .GetState
I0913 23:52:56.714261   24791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:56.714310   24791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:56.728673   24791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44487
I0913 23:52:56.729097   24791 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:56.729607   24791 main.go:141] libmachine: Using API Version  1
I0913 23:52:56.729626   24791 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:56.729914   24791 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:56.730077   24791 main.go:141] libmachine: (functional-383860) Calling .DriverName
I0913 23:52:56.730271   24791 ssh_runner.go:195] Run: systemctl --version
I0913 23:52:56.730303   24791 main.go:141] libmachine: (functional-383860) Calling .GetSSHHostname
I0913 23:52:56.733753   24791 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:56.734196   24791 main.go:141] libmachine: (functional-383860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:bb:73", ip: ""} in network mk-functional-383860: {Iface:virbr1 ExpiryTime:2024-09-14 00:45:43 +0000 UTC Type:0 Mac:52:54:00:f8:bb:73 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:functional-383860 Clientid:01:52:54:00:f8:bb:73}
I0913 23:52:56.734228   24791 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined IP address 192.168.39.156 and MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:56.734468   24791 main.go:141] libmachine: (functional-383860) Calling .GetSSHPort
I0913 23:52:56.734666   24791 main.go:141] libmachine: (functional-383860) Calling .GetSSHKeyPath
I0913 23:52:56.734802   24791 main.go:141] libmachine: (functional-383860) Calling .GetSSHUsername
I0913 23:52:56.735013   24791 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/functional-383860/id_rsa Username:docker}
I0913 23:52:56.818322   24791 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 23:52:56.871128   24791 main.go:141] libmachine: Making call to close driver server
I0913 23:52:56.871145   24791 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:56.871407   24791 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:56.871429   24791 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 23:52:56.871443   24791 main.go:141] libmachine: Making call to close driver server
I0913 23:52:56.871451   24791 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:56.871678   24791 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:52:56.871715   24791 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:56.871725   24791 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
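As the stderr shows, image ls is backed by "sudo crictl images --output json" run over SSH inside the node, so the same data can be fetched directly; the table, json, and yaml variants that follow are just different renderings of that output. A sketch:

	# The raw image list, straight from the runtime inside the node:
	out/minikube-linux-amd64 -p functional-383860 ssh "sudo crictl images --output json"
	# The same list rendered by minikube:
	out/minikube-linux-amd64 -p functional-383860 image ls --format table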

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383860 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-383860  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-383860  | 3679a66d34995 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383860 image ls --format table --alsologtostderr:
I0913 23:52:57.317289   24930 out.go:345] Setting OutFile to fd 1 ...
I0913 23:52:57.317419   24930 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:57.317430   24930 out.go:358] Setting ErrFile to fd 2...
I0913 23:52:57.317437   24930 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:57.317643   24930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
I0913 23:52:57.318294   24930 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:57.318435   24930 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:57.319016   24930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:57.319061   24930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:57.335903   24930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34595
I0913 23:52:57.336502   24930 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:57.337182   24930 main.go:141] libmachine: Using API Version  1
I0913 23:52:57.337198   24930 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:57.337600   24930 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:57.337766   24930 main.go:141] libmachine: (functional-383860) Calling .GetState
I0913 23:52:57.340056   24930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:57.340098   24930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:57.355165   24930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
I0913 23:52:57.355715   24930 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:57.356670   24930 main.go:141] libmachine: Using API Version  1
I0913 23:52:57.356724   24930 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:57.357041   24930 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:57.357213   24930 main.go:141] libmachine: (functional-383860) Calling .DriverName
I0913 23:52:57.357423   24930 ssh_runner.go:195] Run: systemctl --version
I0913 23:52:57.357447   24930 main.go:141] libmachine: (functional-383860) Calling .GetSSHHostname
I0913 23:52:57.360178   24930 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:57.360673   24930 main.go:141] libmachine: (functional-383860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:bb:73", ip: ""} in network mk-functional-383860: {Iface:virbr1 ExpiryTime:2024-09-14 00:45:43 +0000 UTC Type:0 Mac:52:54:00:f8:bb:73 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:functional-383860 Clientid:01:52:54:00:f8:bb:73}
I0913 23:52:57.360708   24930 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined IP address 192.168.39.156 and MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:57.360840   24930 main.go:141] libmachine: (functional-383860) Calling .GetSSHPort
I0913 23:52:57.360991   24930 main.go:141] libmachine: (functional-383860) Calling .GetSSHKeyPath
I0913 23:52:57.361130   24930 main.go:141] libmachine: (functional-383860) Calling .GetSSHUsername
I0913 23:52:57.361279   24930 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/functional-383860/id_rsa Username:docker}
I0913 23:52:57.442016   24930 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 23:52:57.493006   24930 main.go:141] libmachine: Making call to close driver server
I0913 23:52:57.493031   24930 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:57.493338   24930 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:57.493356   24930 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 23:52:57.493372   24930 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:52:57.493378   24930 main.go:141] libmachine: Making call to close driver server
I0913 23:52:57.493408   24930 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:57.493607   24930 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:52:57.493616   24930 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:57.493674   24930 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383860 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3
ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"3679a66d34995a7aa897fbad89f5ee62098fdfa6a8b13cf4bd0de44dbd99024c","repoDigests":["localhost/minikube-local-cache-test@sha256:51fa69011fc6080371c583bb7eadb1c949dae5b77359723a79ebffc6cd306ee6"],"repoTags":["localhost/minikube-local-cache-test:functional-383860"],"size":"3326"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30a
a9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":[
"docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5
617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-383860"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9aa1fad941575eed91ab13d44f3e4cb
5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/
pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383860 image ls --format json --alsologtostderr:
I0913 23:52:57.148472   24888 out.go:345] Setting OutFile to fd 1 ...
I0913 23:52:57.148752   24888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:57.148763   24888 out.go:358] Setting ErrFile to fd 2...
I0913 23:52:57.148767   24888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:57.148944   24888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
I0913 23:52:57.149517   24888 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:57.149614   24888 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:57.149965   24888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:57.150005   24888 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:57.166595   24888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
I0913 23:52:57.167104   24888 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:57.167708   24888 main.go:141] libmachine: Using API Version  1
I0913 23:52:57.167731   24888 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:57.168220   24888 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:57.168376   24888 main.go:141] libmachine: (functional-383860) Calling .GetState
I0913 23:52:57.170596   24888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:57.170632   24888 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:57.186592   24888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
I0913 23:52:57.187075   24888 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:57.187603   24888 main.go:141] libmachine: Using API Version  1
I0913 23:52:57.187627   24888 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:57.188058   24888 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:57.188298   24888 main.go:141] libmachine: (functional-383860) Calling .DriverName
I0913 23:52:57.188525   24888 ssh_runner.go:195] Run: systemctl --version
I0913 23:52:57.188552   24888 main.go:141] libmachine: (functional-383860) Calling .GetSSHHostname
I0913 23:52:57.191546   24888 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:57.192030   24888 main.go:141] libmachine: (functional-383860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:bb:73", ip: ""} in network mk-functional-383860: {Iface:virbr1 ExpiryTime:2024-09-14 00:45:43 +0000 UTC Type:0 Mac:52:54:00:f8:bb:73 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:functional-383860 Clientid:01:52:54:00:f8:bb:73}
I0913 23:52:57.192064   24888 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined IP address 192.168.39.156 and MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:57.192248   24888 main.go:141] libmachine: (functional-383860) Calling .GetSSHPort
I0913 23:52:57.192436   24888 main.go:141] libmachine: (functional-383860) Calling .GetSSHKeyPath
I0913 23:52:57.192617   24888 main.go:141] libmachine: (functional-383860) Calling .GetSSHUsername
I0913 23:52:57.192776   24888 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/functional-383860/id_rsa Username:docker}
I0913 23:52:57.274178   24888 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 23:52:57.320475   24888 main.go:141] libmachine: Making call to close driver server
I0913 23:52:57.320491   24888 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:57.320753   24888 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:57.320771   24888 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 23:52:57.320781   24888 main.go:141] libmachine: Making call to close driver server
I0913 23:52:57.320788   24888 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:52:57.320793   24888 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:57.321048   24888 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:52:57.321084   24888 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:57.321103   24888 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383860 image ls --format yaml --alsologtostderr:
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-383860
size: "4943877"
- id: 3679a66d34995a7aa897fbad89f5ee62098fdfa6a8b13cf4bd0de44dbd99024c
repoDigests:
- localhost/minikube-local-cache-test@sha256:51fa69011fc6080371c583bb7eadb1c949dae5b77359723a79ebffc6cd306ee6
repoTags:
- localhost/minikube-local-cache-test:functional-383860
size: "3326"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383860 image ls --format yaml --alsologtostderr:
I0913 23:52:56.917239   24841 out.go:345] Setting OutFile to fd 1 ...
I0913 23:52:56.917368   24841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:56.917378   24841 out.go:358] Setting ErrFile to fd 2...
I0913 23:52:56.917387   24841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:56.917589   24841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
I0913 23:52:56.918160   24841 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:56.918266   24841 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:56.918644   24841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:56.918687   24841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:56.935521   24841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
I0913 23:52:56.935991   24841 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:56.936548   24841 main.go:141] libmachine: Using API Version  1
I0913 23:52:56.936570   24841 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:56.936944   24841 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:56.937124   24841 main.go:141] libmachine: (functional-383860) Calling .GetState
I0913 23:52:56.939135   24841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:56.939171   24841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:56.954177   24841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
I0913 23:52:56.954617   24841 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:56.955162   24841 main.go:141] libmachine: Using API Version  1
I0913 23:52:56.955186   24841 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:56.955537   24841 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:56.955719   24841 main.go:141] libmachine: (functional-383860) Calling .DriverName
I0913 23:52:56.955932   24841 ssh_runner.go:195] Run: systemctl --version
I0913 23:52:56.955970   24841 main.go:141] libmachine: (functional-383860) Calling .GetSSHHostname
I0913 23:52:56.959390   24841 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:56.959827   24841 main.go:141] libmachine: (functional-383860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:bb:73", ip: ""} in network mk-functional-383860: {Iface:virbr1 ExpiryTime:2024-09-14 00:45:43 +0000 UTC Type:0 Mac:52:54:00:f8:bb:73 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:functional-383860 Clientid:01:52:54:00:f8:bb:73}
I0913 23:52:56.959858   24841 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined IP address 192.168.39.156 and MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:56.959997   24841 main.go:141] libmachine: (functional-383860) Calling .GetSSHPort
I0913 23:52:56.960158   24841 main.go:141] libmachine: (functional-383860) Calling .GetSSHKeyPath
I0913 23:52:56.960437   24841 main.go:141] libmachine: (functional-383860) Calling .GetSSHUsername
I0913 23:52:56.960672   24841 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/functional-383860/id_rsa Username:docker}
I0913 23:52:57.044412   24841 ssh_runner.go:195] Run: sudo crictl images --output json
I0913 23:52:57.097058   24841 main.go:141] libmachine: Making call to close driver server
I0913 23:52:57.097086   24841 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:57.097456   24841 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:52:57.097497   24841 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:57.097509   24841 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 23:52:57.097536   24841 main.go:141] libmachine: Making call to close driver server
I0913 23:52:57.097548   24841 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:52:57.097977   24841 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:52:57.097997   24841 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh pgrep buildkitd: exit status 1 (199.083036ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image build -t localhost/my-image:functional-383860 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 image build -t localhost/my-image:functional-383860 testdata/build --alsologtostderr: (3.656890163s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383860 image build -t localhost/my-image:functional-383860 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fdef81c0066
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-383860
--> b82a0b10f1d
Successfully tagged localhost/my-image:functional-383860
b82a0b10f1d2c9e33db2969d4447d5a05ca3bc75b525afba42f0bd2b90a2344b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383860 image build -t localhost/my-image:functional-383860 testdata/build --alsologtostderr:
I0913 23:52:57.318685   24931 out.go:345] Setting OutFile to fd 1 ...
I0913 23:52:57.318801   24931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:57.318806   24931 out.go:358] Setting ErrFile to fd 2...
I0913 23:52:57.318810   24931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:52:57.318964   24931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
I0913 23:52:57.319614   24931 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:57.320171   24931 config.go:182] Loaded profile config "functional-383860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0913 23:52:57.320587   24931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:57.320615   24931 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:57.336570   24931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
I0913 23:52:57.337043   24931 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:57.337752   24931 main.go:141] libmachine: Using API Version  1
I0913 23:52:57.337779   24931 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:57.338148   24931 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:57.338335   24931 main.go:141] libmachine: (functional-383860) Calling .GetState
I0913 23:52:57.340254   24931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0913 23:52:57.340282   24931 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 23:52:57.355295   24931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
I0913 23:52:57.355670   24931 main.go:141] libmachine: () Calling .GetVersion
I0913 23:52:57.356142   24931 main.go:141] libmachine: Using API Version  1
I0913 23:52:57.356188   24931 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 23:52:57.356527   24931 main.go:141] libmachine: () Calling .GetMachineName
I0913 23:52:57.356704   24931 main.go:141] libmachine: (functional-383860) Calling .DriverName
I0913 23:52:57.356870   24931 ssh_runner.go:195] Run: systemctl --version
I0913 23:52:57.356906   24931 main.go:141] libmachine: (functional-383860) Calling .GetSSHHostname
I0913 23:52:57.360041   24931 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:57.360491   24931 main.go:141] libmachine: (functional-383860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:bb:73", ip: ""} in network mk-functional-383860: {Iface:virbr1 ExpiryTime:2024-09-14 00:45:43 +0000 UTC Type:0 Mac:52:54:00:f8:bb:73 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:functional-383860 Clientid:01:52:54:00:f8:bb:73}
I0913 23:52:57.360519   24931 main.go:141] libmachine: (functional-383860) DBG | domain functional-383860 has defined IP address 192.168.39.156 and MAC address 52:54:00:f8:bb:73 in network mk-functional-383860
I0913 23:52:57.360688   24931 main.go:141] libmachine: (functional-383860) Calling .GetSSHPort
I0913 23:52:57.360843   24931 main.go:141] libmachine: (functional-383860) Calling .GetSSHKeyPath
I0913 23:52:57.360970   24931 main.go:141] libmachine: (functional-383860) Calling .GetSSHUsername
I0913 23:52:57.361119   24931 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/functional-383860/id_rsa Username:docker}
I0913 23:52:57.441894   24931 build_images.go:161] Building image from path: /tmp/build.2241882828.tar
I0913 23:52:57.441963   24931 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 23:52:57.460292   24931 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2241882828.tar
I0913 23:52:57.467420   24931 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2241882828.tar: stat -c "%s %y" /var/lib/minikube/build/build.2241882828.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2241882828.tar': No such file or directory
I0913 23:52:57.467459   24931 ssh_runner.go:362] scp /tmp/build.2241882828.tar --> /var/lib/minikube/build/build.2241882828.tar (3072 bytes)
I0913 23:52:57.506134   24931 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2241882828
I0913 23:52:57.516025   24931 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2241882828 -xf /var/lib/minikube/build/build.2241882828.tar
I0913 23:52:57.526092   24931 crio.go:315] Building image: /var/lib/minikube/build/build.2241882828
I0913 23:52:57.526147   24931 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-383860 /var/lib/minikube/build/build.2241882828 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0913 23:53:00.890201   24931 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-383860 /var/lib/minikube/build/build.2241882828 --cgroup-manager=cgroupfs: (3.364029486s)
I0913 23:53:00.890286   24931 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2241882828
I0913 23:53:00.903300   24931 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2241882828.tar
I0913 23:53:00.913643   24931 build_images.go:217] Built localhost/my-image:functional-383860 from /tmp/build.2241882828.tar
I0913 23:53:00.913680   24931 build_images.go:133] succeeded building to: functional-383860
I0913 23:53:00.913687   24931 build_images.go:134] failed building to: 
I0913 23:53:00.913716   24931 main.go:141] libmachine: Making call to close driver server
I0913 23:53:00.913728   24931 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:53:00.914016   24931 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:53:00.914035   24931 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:53:00.914047   24931 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 23:53:00.914062   24931 main.go:141] libmachine: Making call to close driver server
I0913 23:53:00.914070   24931 main.go:141] libmachine: (functional-383860) Calling .Close
I0913 23:53:00.914306   24931 main.go:141] libmachine: Successfully made call to close driver server
I0913 23:53:00.914425   24931 main.go:141] libmachine: (functional-383860) DBG | Closing plugin on server side
I0913 23:53:00.914472   24931 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)
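
For reference, the in-cluster build above can be reproduced by hand against the same profile. This is a minimal sketch using only commands recorded in this run; testdata/build is the build context from the minikube source checkout, and on the crio runtime the build is carried out with podman inside the VM (the pgrep probe for buildkitd exits non-zero here):

  # probe for buildkitd; it is not running on crio, so the build is handled by podman in the VM
  out/minikube-linux-amd64 -p functional-383860 ssh pgrep buildkitd
  # copy the context into the VM and build it there
  out/minikube-linux-amd64 -p functional-383860 image build -t localhost/my-image:functional-383860 testdata/build --alsologtostderr
  # confirm the new tag is visible to the runtime
  out/minikube-linux-amd64 -p functional-383860 image ls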

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.749317122s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-383860
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image load --daemon kicbase/echo-server:functional-383860 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 image load --daemon kicbase/echo-server:functional-383860 --alsologtostderr: (3.133610701s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image load --daemon kicbase/echo-server:functional-383860 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-383860
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image load --daemon kicbase/echo-server:functional-383860 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image save kicbase/echo-server:functional-383860 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image rm kicbase/echo-server:functional-383860 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 image rm kicbase/echo-server:functional-383860 --alsologtostderr: (2.871460421s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 service list -o json
functional_test.go:1494: Took "923.315946ms" to run "out/minikube-linux-amd64 -p functional-383860 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-383860 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.656967607s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.91s)
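
Together with ImageSaveToFile above, this test completes a save-then-load round trip through a tarball. A condensed sketch of the same sequence; the tarball path is simply the CI workspace location used in this run, and any writable host path would do:

  # export the tagged image from the cluster to a tar archive on the host
  out/minikube-linux-amd64 -p functional-383860 image save kicbase/echo-server:functional-383860 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  # re-import the archive into the cluster's container runtime
  out/minikube-linux-amd64 -p functional-383860 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  # verify the image is listed again
  out/minikube-linux-amd64 -p functional-383860 image ls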

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.156:30401
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.156:30401
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
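
The ServiceCmd checks above query the same hello-node NodePort service in different forms. The equivalent manual calls, with the endpoint values observed in this run shown as comments:

  out/minikube-linux-amd64 -p functional-383860 service list                                          # tabular listing of services
  out/minikube-linux-amd64 -p functional-383860 service --namespace=default --https --url hello-node  # -> https://192.168.39.156:30401
  out/minikube-linux-amd64 -p functional-383860 service hello-node --url --format={{.IP}}             # node IP only
  out/minikube-linux-amd64 -p functional-383860 service hello-node --url                              # -> http://192.168.39.156:30401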

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "383.35185ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.761626ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "279.450071ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.26508ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-383860
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 image save --daemon kicbase/echo-server:functional-383860 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-383860
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdany-port806020118/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726271561011358069" to /tmp/TestFunctionalparallelMountCmdany-port806020118/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726271561011358069" to /tmp/TestFunctionalparallelMountCmdany-port806020118/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726271561011358069" to /tmp/TestFunctionalparallelMountCmdany-port806020118/001/test-1726271561011358069
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.796277ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 23:52 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 23:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 23:52 test-1726271561011358069
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh cat /mount-9p/test-1726271561011358069
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-383860 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ae8d93e9-9727-49f7-bf8c-9d2f30db11a7] Pending
helpers_test.go:344: "busybox-mount" [ae8d93e9-9727-49f7-bf8c-9d2f30db11a7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ae8d93e9-9727-49f7-bf8c-9d2f30db11a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ae8d93e9-9727-49f7-bf8c-9d2f30db11a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.010353723s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-383860 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdany-port806020118/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.48s)
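
A minimal sketch of the 9p mount flow exercised above, using the same commands the test runs. The host directory is a throwaway temp path from this run (any host directory works), and the mount helper is kept alive in a second terminal here, mirroring the background daemon the test harness uses:

  # keep this running while the mount is needed
  out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdany-port806020118/001:/mount-9p --alsologtostderr -v=1
  # in another shell: confirm the 9p filesystem is mounted in the guest, then inspect it
  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-383860 ssh -- ls -la /mount-9p
  # force-unmount from the guest side before stopping the helper
  out/minikube-linux-amd64 -p functional-383860 ssh "sudo umount -f /mount-9p"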

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdspecific-port4086651385/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.207106ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdspecific-port4086651385/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh "sudo umount -f /mount-9p": exit status 1 (206.456972ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-383860 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdspecific-port4086651385/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1963032507/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1963032507/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1963032507/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T" /mount1: exit status 1 (260.770574ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383860 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-383860 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1963032507/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1963032507/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383860 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1963032507/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-383860
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-383860
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-383860
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-817269 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0913 23:54:31.535296   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:54:59.242461   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-817269 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.917929191s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.60s)
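
The --ha flag requests a multi-control-plane topology at start time, which is what the rest of this group exercises. The start-and-verify pair as run here:

  out/minikube-linux-amd64 start -p ha-817269 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr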

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-817269 -- rollout status deployment/busybox: (4.600031581s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-vsts4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-wff9f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-vsts4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-wff9f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-vsts4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-wff9f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.77s)
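
DeployApp rolls out the busybox DNS-test deployment and then resolves an external name and the in-cluster service names from every replica. Condensed to one replica; the manifest path is the minikube repo's testdata/ha/ha-pod-dns-test.yaml, and pod names differ per run:

  out/minikube-linux-amd64 kubectl -p ha-817269 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 kubectl -p ha-817269 -- rollout status deployment/busybox
  # repeated for each busybox pod in the actual test
  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- nslookup kubernetes.default.svc.cluster.local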

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-5cbmn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-vsts4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-vsts4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-wff9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-817269 -- exec busybox-7dff88458-wff9f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-817269 -v=7 --alsologtostderr
E0913 23:57:20.623824   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:20.630216   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:20.641634   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:20.663090   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:20.704613   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:20.786139   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:20.947697   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:21.269685   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:21.911454   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:23.193414   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:25.754807   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:57:30.876899   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-817269 -v=7 --alsologtostderr: (57.264547291s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-817269 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp testdata/cp-test.txt ha-817269:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269:/home/docker/cp-test.txt ha-817269-m02:/home/docker/cp-test_ha-817269_ha-817269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test_ha-817269_ha-817269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269:/home/docker/cp-test.txt ha-817269-m03:/home/docker/cp-test_ha-817269_ha-817269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test_ha-817269_ha-817269-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269:/home/docker/cp-test.txt ha-817269-m04:/home/docker/cp-test_ha-817269_ha-817269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test_ha-817269_ha-817269-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp testdata/cp-test.txt ha-817269-m02:/home/docker/cp-test.txt
E0913 23:57:41.118774   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m02:/home/docker/cp-test.txt ha-817269:/home/docker/cp-test_ha-817269-m02_ha-817269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test_ha-817269-m02_ha-817269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m02:/home/docker/cp-test.txt ha-817269-m03:/home/docker/cp-test_ha-817269-m02_ha-817269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test_ha-817269-m02_ha-817269-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m02:/home/docker/cp-test.txt ha-817269-m04:/home/docker/cp-test_ha-817269-m02_ha-817269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test_ha-817269-m02_ha-817269-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp testdata/cp-test.txt ha-817269-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt ha-817269:/home/docker/cp-test_ha-817269-m03_ha-817269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test_ha-817269-m03_ha-817269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt ha-817269-m02:/home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test_ha-817269-m03_ha-817269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m03:/home/docker/cp-test.txt ha-817269-m04:/home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test_ha-817269-m03_ha-817269-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp testdata/cp-test.txt ha-817269-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile889243572/001/cp-test_ha-817269-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt ha-817269:/home/docker/cp-test_ha-817269-m04_ha-817269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test_ha-817269-m04_ha-817269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt ha-817269-m02:/home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test_ha-817269-m04_ha-817269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 cp ha-817269-m04:/home/docker/cp-test.txt ha-817269-m03:/home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m03 "sudo cat /home/docker/cp-test_ha-817269-m04_ha-817269-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.81s)
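
CopyFile pushes the same test file from the host to every node and then between every pair of nodes, verifying each copy over ssh. The pattern for a single host-to-node and node-to-node pair, taken from the commands above:

  out/minikube-linux-amd64 -p ha-817269 cp testdata/cp-test.txt ha-817269:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-amd64 -p ha-817269 cp ha-817269:/home/docker/cp-test.txt ha-817269-m02:/home/docker/cp-test_ha-817269_ha-817269-m02.txt
  out/minikube-linux-amd64 -p ha-817269 ssh -n ha-817269-m02 "sudo cat /home/docker/cp-test_ha-817269_ha-817269-m02.txt"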

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.474226439s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 node delete m03 -v=7 --alsologtostderr
E0914 00:07:20.624459   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-817269 node delete m03 -v=7 --alsologtostderr: (16.074230618s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (352.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-817269 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 00:12:20.626048   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:13:43.689143   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:14:31.535134   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-817269 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m51.992632522s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (352.75s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

TestMultiControlPlane/serial/AddSecondaryNode (76.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-817269 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-817269 --control-plane -v=7 --alsologtostderr: (1m16.061653222s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-817269 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (78.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-699861 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0914 00:17:20.625909   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-699861 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.399587022s)
--- PASS: TestJSONOutput/start/Command (78.40s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-699861 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-699861 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-699861 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-699861 --output=json --user=testUser: (7.349675924s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-014283 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-014283 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.249471ms)
-- stdout --
	{"specversion":"1.0","id":"f4552d98-a77d-4c48-9de9-ed1991e240b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-014283] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"443d6b3c-7c6a-4a6c-9f85-9c71ea5d2e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"47f18475-d19c-4141-8958-4623addbbd20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7ef11cfe-db08-43e1-b1c0-483abc0807e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig"}}
	{"specversion":"1.0","id":"db408aef-e204-4cec-be81-20f3760fab94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube"}}
	{"specversion":"1.0","id":"28de4a8a-886c-4bf5-b23f-de91cbd82910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0e7cf35f-8e5b-4f29-a5e0-9abb1634b40d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1352eceb-4c26-4250-b357-017c43a80348","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-014283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-014283
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (91.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-836382 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-836382 --driver=kvm2  --container-runtime=crio: (42.244370645s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-845717 --driver=kvm2  --container-runtime=crio
E0914 00:19:31.535134   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-845717 --driver=kvm2  --container-runtime=crio: (46.859926065s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-836382
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-845717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-845717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-845717
helpers_test.go:175: Cleaning up "first-836382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-836382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-836382: (1.010878965s)
--- PASS: TestMinikubeProfile (91.96s)

TestMountStart/serial/StartWithMountFirst (26.3s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-119257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-119257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.304302179s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.30s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-119257 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-119257 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (24.44s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-130197 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-130197 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.442648273s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.44s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-130197 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-130197 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-119257 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-130197 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-130197 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-130197
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-130197: (1.274565776s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (24.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-130197
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-130197: (23.007843256s)
--- PASS: TestMountStart/serial/RestartStopped (24.01s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-130197 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-130197 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (114.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209237 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 00:22:20.624402   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:34.606357   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209237 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.755029955s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.14s)

TestMultiNode/serial/DeployApp2Nodes (5.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-209237 -- rollout status deployment/busybox: (4.060675659s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-956wv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-mxhwp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-956wv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-mxhwp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-956wv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-mxhwp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.48s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-956wv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-956wv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-mxhwp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209237 -- exec busybox-7dff88458-mxhwp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (47.51s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-209237 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-209237 -v 3 --alsologtostderr: (46.973265361s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.51s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-209237 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp testdata/cp-test.txt multinode-209237:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3527454802/001/cp-test_multinode-209237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237:/home/docker/cp-test.txt multinode-209237-m02:/home/docker/cp-test_multinode-209237_multinode-209237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m02 "sudo cat /home/docker/cp-test_multinode-209237_multinode-209237-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237:/home/docker/cp-test.txt multinode-209237-m03:/home/docker/cp-test_multinode-209237_multinode-209237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m03 "sudo cat /home/docker/cp-test_multinode-209237_multinode-209237-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp testdata/cp-test.txt multinode-209237-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3527454802/001/cp-test_multinode-209237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt multinode-209237:/home/docker/cp-test_multinode-209237-m02_multinode-209237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237 "sudo cat /home/docker/cp-test_multinode-209237-m02_multinode-209237.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237-m02:/home/docker/cp-test.txt multinode-209237-m03:/home/docker/cp-test_multinode-209237-m02_multinode-209237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m03 "sudo cat /home/docker/cp-test_multinode-209237-m02_multinode-209237-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp testdata/cp-test.txt multinode-209237-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3527454802/001/cp-test_multinode-209237-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt multinode-209237:/home/docker/cp-test_multinode-209237-m03_multinode-209237.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237 "sudo cat /home/docker/cp-test_multinode-209237-m03_multinode-209237.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 cp multinode-209237-m03:/home/docker/cp-test.txt multinode-209237-m02:/home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 ssh -n multinode-209237-m02 "sudo cat /home/docker/cp-test_multinode-209237-m03_multinode-209237-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.88s)

TestMultiNode/serial/StopNode (2.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-209237 node stop m03: (1.339315327s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209237 status: exit status 7 (404.337755ms)
-- stdout --
	multinode-209237
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209237-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-209237-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209237 status --alsologtostderr: exit status 7 (409.265167ms)
-- stdout --
	multinode-209237
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209237-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-209237-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0914 00:24:29.310746   42798 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:24:29.310863   42798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:24:29.310872   42798 out.go:358] Setting ErrFile to fd 2...
	I0914 00:24:29.310876   42798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:24:29.311030   42798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:24:29.311181   42798 out.go:352] Setting JSON to false
	I0914 00:24:29.311210   42798 mustload.go:65] Loading cluster: multinode-209237
	I0914 00:24:29.311313   42798 notify.go:220] Checking for updates...
	I0914 00:24:29.311590   42798 config.go:182] Loaded profile config "multinode-209237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:24:29.311603   42798 status.go:255] checking status of multinode-209237 ...
	I0914 00:24:29.311996   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.312047   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.331308   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0914 00:24:29.331743   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.332288   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.332308   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.332741   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.332932   42798 main.go:141] libmachine: (multinode-209237) Calling .GetState
	I0914 00:24:29.334441   42798 status.go:330] multinode-209237 host status = "Running" (err=<nil>)
	I0914 00:24:29.334456   42798 host.go:66] Checking if "multinode-209237" exists ...
	I0914 00:24:29.334752   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.334789   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.349808   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0914 00:24:29.350243   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.350713   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.350733   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.351053   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.351223   42798 main.go:141] libmachine: (multinode-209237) Calling .GetIP
	I0914 00:24:29.353656   42798 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:24:29.354040   42798 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:24:29.354071   42798 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:24:29.354191   42798 host.go:66] Checking if "multinode-209237" exists ...
	I0914 00:24:29.354555   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.354589   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.369808   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0914 00:24:29.370258   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.370702   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.370726   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.371051   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.371244   42798 main.go:141] libmachine: (multinode-209237) Calling .DriverName
	I0914 00:24:29.371481   42798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:24:29.371508   42798 main.go:141] libmachine: (multinode-209237) Calling .GetSSHHostname
	I0914 00:24:29.374340   42798 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:24:29.374783   42798 main.go:141] libmachine: (multinode-209237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:e0:16", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:21:46 +0000 UTC Type:0 Mac:52:54:00:bc:e0:16 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-209237 Clientid:01:52:54:00:bc:e0:16}
	I0914 00:24:29.374815   42798 main.go:141] libmachine: (multinode-209237) DBG | domain multinode-209237 has defined IP address 192.168.39.214 and MAC address 52:54:00:bc:e0:16 in network mk-multinode-209237
	I0914 00:24:29.374964   42798 main.go:141] libmachine: (multinode-209237) Calling .GetSSHPort
	I0914 00:24:29.375141   42798 main.go:141] libmachine: (multinode-209237) Calling .GetSSHKeyPath
	I0914 00:24:29.375280   42798 main.go:141] libmachine: (multinode-209237) Calling .GetSSHUsername
	I0914 00:24:29.375440   42798 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237/id_rsa Username:docker}
	I0914 00:24:29.454914   42798 ssh_runner.go:195] Run: systemctl --version
	I0914 00:24:29.461198   42798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:24:29.475031   42798 kubeconfig.go:125] found "multinode-209237" server: "https://192.168.39.214:8443"
	I0914 00:24:29.475070   42798 api_server.go:166] Checking apiserver status ...
	I0914 00:24:29.475111   42798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:24:29.488400   42798 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1038/cgroup
	W0914 00:24:29.497911   42798 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 00:24:29.497978   42798 ssh_runner.go:195] Run: ls
	I0914 00:24:29.501851   42798 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I0914 00:24:29.507059   42798 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I0914 00:24:29.507086   42798 status.go:422] multinode-209237 apiserver status = Running (err=<nil>)
	I0914 00:24:29.507098   42798 status.go:257] multinode-209237 status: &{Name:multinode-209237 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:24:29.507123   42798 status.go:255] checking status of multinode-209237-m02 ...
	I0914 00:24:29.507422   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.507469   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.522671   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
	I0914 00:24:29.523034   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.523559   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.523582   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.523902   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.524079   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .GetState
	I0914 00:24:29.525544   42798 status.go:330] multinode-209237-m02 host status = "Running" (err=<nil>)
	I0914 00:24:29.525559   42798 host.go:66] Checking if "multinode-209237-m02" exists ...
	I0914 00:24:29.525931   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.525977   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.541058   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0914 00:24:29.541471   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.541979   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.541996   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.542275   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.542418   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .GetIP
	I0914 00:24:29.545094   42798 main.go:141] libmachine: (multinode-209237-m02) DBG | domain multinode-209237-m02 has defined MAC address 52:54:00:6e:d9:47 in network mk-multinode-209237
	I0914 00:24:29.545507   42798 main.go:141] libmachine: (multinode-209237-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:d9:47", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:22:48 +0000 UTC Type:0 Mac:52:54:00:6e:d9:47 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-209237-m02 Clientid:01:52:54:00:6e:d9:47}
	I0914 00:24:29.545541   42798 main.go:141] libmachine: (multinode-209237-m02) DBG | domain multinode-209237-m02 has defined IP address 192.168.39.88 and MAC address 52:54:00:6e:d9:47 in network mk-multinode-209237
	I0914 00:24:29.545711   42798 host.go:66] Checking if "multinode-209237-m02" exists ...
	I0914 00:24:29.546033   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.546069   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.561379   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0914 00:24:29.561862   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.562327   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.562352   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.562702   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.562877   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .DriverName
	I0914 00:24:29.563056   42798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:24:29.563079   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .GetSSHHostname
	I0914 00:24:29.565428   42798 main.go:141] libmachine: (multinode-209237-m02) DBG | domain multinode-209237-m02 has defined MAC address 52:54:00:6e:d9:47 in network mk-multinode-209237
	I0914 00:24:29.565815   42798 main.go:141] libmachine: (multinode-209237-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:d9:47", ip: ""} in network mk-multinode-209237: {Iface:virbr1 ExpiryTime:2024-09-14 01:22:48 +0000 UTC Type:0 Mac:52:54:00:6e:d9:47 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-209237-m02 Clientid:01:52:54:00:6e:d9:47}
	I0914 00:24:29.565852   42798 main.go:141] libmachine: (multinode-209237-m02) DBG | domain multinode-209237-m02 has defined IP address 192.168.39.88 and MAC address 52:54:00:6e:d9:47 in network mk-multinode-209237
	I0914 00:24:29.565953   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .GetSSHPort
	I0914 00:24:29.566110   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .GetSSHKeyPath
	I0914 00:24:29.566242   42798 main.go:141] libmachine: (multinode-209237-m02) Calling .GetSSHUsername
	I0914 00:24:29.566344   42798 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19640-5422/.minikube/machines/multinode-209237-m02/id_rsa Username:docker}
	I0914 00:24:29.646729   42798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:24:29.659487   42798 status.go:257] multinode-209237-m02 status: &{Name:multinode-209237-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:24:29.659519   42798 status.go:255] checking status of multinode-209237-m03 ...
	I0914 00:24:29.659937   42798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 00:24:29.659977   42798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 00:24:29.675687   42798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0914 00:24:29.676131   42798 main.go:141] libmachine: () Calling .GetVersion
	I0914 00:24:29.676626   42798 main.go:141] libmachine: Using API Version  1
	I0914 00:24:29.676648   42798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 00:24:29.676947   42798 main.go:141] libmachine: () Calling .GetMachineName
	I0914 00:24:29.677149   42798 main.go:141] libmachine: (multinode-209237-m03) Calling .GetState
	I0914 00:24:29.678704   42798 status.go:330] multinode-209237-m03 host status = "Stopped" (err=<nil>)
	I0914 00:24:29.678718   42798 status.go:343] host is not running, skipping remaining checks
	I0914 00:24:29.678724   42798 status.go:257] multinode-209237-m03 status: &{Name:multinode-209237-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)

TestMultiNode/serial/StartAfterStop (40.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 node start m03 -v=7 --alsologtostderr
E0914 00:24:31.534941   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-209237 node start m03 -v=7 --alsologtostderr: (39.422648743s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.03s)

TestMultiNode/serial/DeleteNode (2.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-209237 node delete m03: (1.786491631s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)

TestMultiNode/serial/RestartMultiNode (185.54s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209237 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 00:34:31.534924   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209237 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m5.036537391s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209237 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (185.54s)

TestMultiNode/serial/ValidateNameConflict (41.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-209237
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209237-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-209237-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.256056ms)
-- stdout --
	* [multinode-209237-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-209237-m02' is duplicated with machine name 'multinode-209237-m02' in profile 'multinode-209237'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209237-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209237-m03 --driver=kvm2  --container-runtime=crio: (39.900049611s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-209237
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-209237: exit status 80 (207.054416ms)
-- stdout --
	* Adding node m03 to cluster multinode-209237 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-209237-m03 already exists in multinode-209237-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-209237-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.17s)

TestScheduledStopUnix (113.29s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-054201 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-054201 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.742974626s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054201 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-054201 -n scheduled-stop-054201
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054201 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054201 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-054201 -n scheduled-stop-054201
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-054201
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054201 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-054201
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-054201: exit status 7 (65.547023ms)

                                                
                                                
-- stdout --
	scheduled-stop-054201
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-054201 -n scheduled-stop-054201
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-054201 -n scheduled-stop-054201: exit status 7 (66.205979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-054201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-054201
--- PASS: TestScheduledStopUnix (113.29s)
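The scheduled-stop flow above boils down to scheduling a stop and then polling status until the host reports Stopped; a stopped profile makes `minikube status` exit with status 7, which is why the exit code is tolerated below. A minimal sketch, assuming a minikube binary on PATH and reusing the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls `minikube status --format={{.Host}}` until the host
// reports Stopped. The non-zero exit (status 7) of a stopped profile is
// ignored; only the printed host state is inspected.
func waitForStopped(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("profile %s did not stop within %s", profile, timeout)
}

func main() {
	profile := "scheduled-stop-054201" // profile name taken from the log
	// Schedule the stop 15 seconds out, then wait for it to take effect.
	if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		fmt.Println("scheduling stop failed:", err)
		return
	}
	fmt.Println(waitForStopped(profile, 2*time.Minute))
}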

                                                
                                    
TestRunningBinaryUpgrade (196.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1315420211 start -p running-upgrade-482471 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1315420211 start -p running-upgrade-482471 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m44.810476275s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-482471 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0914 00:47:03.693294   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-482471 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.472529939s)
helpers_test.go:175: Cleaning up "running-upgrade-482471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-482471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-482471: (1.228048299s)
--- PASS: TestRunningBinaryUpgrade (196.94s)
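The running-binary upgrade exercised above is, in essence, a start with an older release followed by a second `start` on the same still-running profile with the newer binary. A minimal sketch of that sequence; the binary paths are placeholders (the harness downloads the old release to a temp file and uses out/minikube-linux-amd64 for the new build):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	const profile = "running-upgrade-482471" // profile name taken from the log
	oldBin := "/tmp/minikube-v1.26.0"        // placeholder: path to the previously released binary
	newBin := "out/minikube-linux-amd64"     // placeholder: path to the newly built binary

	// Bring the cluster up with the old binary (old releases use --vm-driver).
	run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	// Re-run start on the same, still running, profile with the new binary.
	run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
	// Clean up the profile afterwards, as the harness does.
	run(newBin, "delete", "-p", profile)
}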

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444049 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-444049 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.145446ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-444049] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
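The usage error above documents an invalid flag combination: --no-kubernetes cannot be combined with --kubernetes-version, and the suggested remedy is to unset any globally configured version. A minimal sketch that exercises both the failing invocation and the suggested fix, assuming a minikube binary on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Combining --no-kubernetes with --kubernetes-version is rejected with
	// MK_USAGE (exit status 14), as shown in the log above.
	bad := exec.Command("minikube", "start", "-p", "NoKubernetes-444049",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=kvm2")
	if out, err := bad.CombinedOutput(); err != nil {
		fmt.Printf("expected usage failure:\n%s", out)
	}

	// The error message suggests clearing any globally configured version first.
	_ = exec.Command("minikube", "config", "unset", "kubernetes-version").Run()

	// With no pinned version, a --no-kubernetes start is a valid request.
	good := exec.Command("minikube", "start", "-p", "NoKubernetes-444049",
		"--no-kubernetes", "--driver=kvm2", "--container-runtime=crio")
	out, err := good.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}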

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444049 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444049 --driver=kvm2  --container-runtime=crio: (1m36.6162143s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444049 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.85s)

                                                
                                    
TestNetworkPlugins/group/false (2.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-670449 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-670449 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.586815ms)

                                                
                                                
-- stdout --
	* [false-670449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:42:07.440208   50463 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:42:07.440349   50463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:42:07.440361   50463 out.go:358] Setting ErrFile to fd 2...
	I0914 00:42:07.440367   50463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:42:07.440600   50463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5422/.minikube/bin
	I0914 00:42:07.441246   50463 out.go:352] Setting JSON to false
	I0914 00:42:07.442233   50463 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5073,"bootTime":1726269454,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 00:42:07.442347   50463 start.go:139] virtualization: kvm guest
	I0914 00:42:07.444790   50463 out.go:177] * [false-670449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 00:42:07.446438   50463 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:42:07.446481   50463 notify.go:220] Checking for updates...
	I0914 00:42:07.449721   50463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:42:07.451558   50463 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5422/kubeconfig
	I0914 00:42:07.453135   50463 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5422/.minikube
	I0914 00:42:07.454595   50463 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 00:42:07.455967   50463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:42:07.458047   50463 config.go:182] Loaded profile config "NoKubernetes-444049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:42:07.458144   50463 config.go:182] Loaded profile config "force-systemd-env-451535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:42:07.458254   50463 config.go:182] Loaded profile config "offline-crio-363063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:42:07.458340   50463 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:42:07.496051   50463 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 00:42:07.497070   50463 start.go:297] selected driver: kvm2
	I0914 00:42:07.497083   50463 start.go:901] validating driver "kvm2" against <nil>
	I0914 00:42:07.497103   50463 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:42:07.498920   50463 out.go:201] 
	W0914 00:42:07.499918   50463 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0914 00:42:07.500858   50463 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-670449 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-670449" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-670449

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-670449"

                                                
                                                
----------------------- debugLogs end: false-670449 [took: 2.664447059s] --------------------------------
helpers_test.go:175: Cleaning up "false-670449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-670449
--- PASS: TestNetworkPlugins/group/false (2.91s)
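The "false" network-plugin group only verifies the rejection path: with the crio runtime, disabling CNI via --cni=false is a usage error because crio has no built-in pod networking. A minimal sketch of that check, assuming a minikube binary on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// crio requires a CNI plugin, so --cni=false is rejected with
	// MK_USAGE (exit status 14), matching the stderr captured above.
	cmd := exec.Command("minikube", "start", "-p", "false-670449",
		"--cni=false", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
		fmt.Printf("rejected as expected:\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}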

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (68.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444049 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444049 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m7.554385736s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444049 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-444049 status -o json: exit status 2 (237.211992ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-444049","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-444049
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (68.64s)
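The JSON emitted by `minikube status -o json` above makes the "host running, Kubernetes stopped" state machine-checkable. A minimal sketch that decodes that shape; the struct fields mirror the keys in the output above, and the non-zero exit of `status` (exit 2 in the log) is ignored so only the JSON is inspected:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields seen in the `status -o json` output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-444049",
		"status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status output:", err)
		return
	}
	if st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped" {
		fmt.Println("host is up with Kubernetes disabled, as expected")
	} else {
		fmt.Printf("unexpected state: %+v\n", st)
	}
}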

                                                
                                    
TestNoKubernetes/serial/Start (27.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444049 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444049 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.095020182s)
--- PASS: TestNoKubernetes/serial/Start (27.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444049 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444049 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.398648ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
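The verification above probes systemd inside the guest over SSH; `systemctl is-active --quiet` exits non-zero when the unit is inactive, and minikube ssh propagates that as a failing exit status. A minimal sketch of the same probe, assuming a minikube binary on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit here means kubelet is not an active unit in the guest,
	// which is exactly what the --no-kubernetes profile should look like.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-444049",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active, as expected:", err)
		return
	}
	fmt.Println("kubelet is unexpectedly running")
}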

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-444049
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-444049: (1.2843736s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (62.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444049 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444049 --driver=kvm2  --container-runtime=crio: (1m2.710581366s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444049 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444049 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.853186ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (109.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.598700304 start -p stopped-upgrade-184065 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.598700304 start -p stopped-upgrade-184065 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m6.026922698s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.598700304 -p stopped-upgrade-184065 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.598700304 -p stopped-upgrade-184065 stop: (1.430565905s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-184065 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-184065 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.717031178s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.18s)
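Unlike the running-binary upgrade earlier, this path fully stops the old cluster before the new binary restarts it. A minimal sketch of that sequence; as before, the binary paths are placeholders for the downloaded old release and the freshly built minikube:

package main

import (
	"os"
	"os/exec"
)

// run streams a command's output and panics on failure, keeping the sketch short.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const profile = "stopped-upgrade-184065" // profile name taken from the log
	oldBin := "/tmp/minikube-v1.26.0"        // placeholder: path to the previously released binary
	newBin := "out/minikube-linux-amd64"     // placeholder: path to the newly built binary

	run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	run(oldBin, "-p", profile, "stop") // the cluster is stopped before the upgrade
	run(newBin, "start", "-p", profile, "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
}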

                                                
                                    
TestPause/serial/Start (94.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-609507 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-609507 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.86011468s)
--- PASS: TestPause/serial/Start (94.86s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (101.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m41.304940916s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.31s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-184065
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (114.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m54.830967642s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (114.83s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rkfss" [fbc3a02c-ba24-4168-8cd8-67ff150e0191] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rkfss" [fbc3a02c-ba24-4168-8cd8-67ff150e0191] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005035498s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
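The DNS, Localhost and HairPin checks above all run against the netcat deployment via kubectl exec: an in-cluster DNS lookup, a loopback connection, and a hairpin connection back to the pod through its own service. A minimal sketch bundling the three probes (here all are run through /bin/sh -c for uniformity), assuming kubectl on PATH and the cluster context from the log:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one connectivity check inside the netcat deployment via
// kubectl exec, mirroring the checks in the log above.
func probe(context, shellCmd string) error {
	cmd := exec.Command("kubectl", "--context", context,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", shellCmd, err, out)
	}
	return nil
}

func main() {
	context := "auto-670449" // cluster context taken from the log
	checks := []string{
		"nslookup kubernetes.default",    // in-cluster DNS resolution
		"nc -w 5 -i 5 -z localhost 8080", // pod can reach itself over localhost
		"nc -w 5 -i 5 -z netcat 8080",    // hairpin: pod reaches itself via its service
	}
	for _, c := range checks {
		if err := probe(context, c); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("DNS, localhost and hairpin checks all passed")
}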

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m22.870981314s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.87s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mfjws" [88e2cf5a-e860-4a9a-9e08-df2a680b8912] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005054352s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gzzcv" [af5c8591-4968-4600-8fbb-001e100e6606] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gzzcv" [af5c8591-4968-4600-8fbb-001e100e6606] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003761006s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m11.947519069s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.95s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (61.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m1.513269068s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.51s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-56g98" [a457f491-6ebd-432b-b7ac-e626a00b75d7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006604255s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
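The controller-pod step waits for a pod matching the k8s-app=calico-node label to be Running in kube-system. The harness does this with its own polling helper; outside the harness, `kubectl wait` gives a comparable check, sketched below under the assumption that kubectl is on PATH and the context name matches the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Block until the Calico node pod reports the Ready condition,
	// roughly equivalent to the label-based wait in the log above.
	cmd := exec.Command("kubectl", "--context", "calico-670449",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=calico-node", "--timeout=10m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("calico-node did not become Ready:", err)
		os.Exit(1)
	}
	fmt.Println("calico-node is Ready")
}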

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dbd9h" [de4c3623-d95f-4185-accb-674a842a2cfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dbd9h" [de4c3623-d95f-4185-accb-674a842a2cfd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.054656433s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.60s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-thl6j" [970f1094-daf2-488e-90c1-10ccf3b1b7be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-thl6j" [970f1094-daf2-488e-90c1-10ccf3b1b7be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004867275s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m8.830066907s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.83s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mkzmv" [e28d4ba1-4d98-4029-9f79-cec9734a6fd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mkzmv" [e28d4ba1-4d98-4029-9f79-cec9734a6fd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003842405s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (61.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-670449 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.873229225s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.87s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5kk9z" [16bd2af7-5377-41a0-b0e1-fea333602e6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004820439s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wphvs" [c429aa2e-fdfb-4ee4-9ad4-e23d7e7c206f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wphvs" [c429aa2e-fdfb-4ee4-9ad4-e23d7e7c206f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.006070551s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (108.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-057857 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-057857 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m48.620393206s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-670449 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-670449 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vw7z4" [f67ae264-247c-4c40-9815-8dabf07c1377] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vw7z4" [f67ae264-247c-4c40-9815-8dabf07c1377] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004424976s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (16.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-670449 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-670449 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.170612141s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-670449 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-754332 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-754332 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m8.718173637s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.72s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-670449 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0914 01:23:09.409122   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (56.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-617306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 00:54:31.535145   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.446794   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.453159   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.464638   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.486768   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.528191   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.609657   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:37.771197   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:38.093033   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:38.734952   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:40.016436   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:42.578431   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:54:47.700453   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-617306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (56.536587404s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f69c0db5-0c45-4cca-97bd-61c6f289bc84] Pending
helpers_test.go:344: "busybox" [f69c0db5-0c45-4cca-97bd-61c6f289bc84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 00:54:57.941779   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f69c0db5-0c45-4cca-97bd-61c6f289bc84] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.004122476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-754332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-754332 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-617306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-617306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111329513s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-617306 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-617306 --alsologtostderr -v=3: (10.590926977s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-057857 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [40542d8f-fc1e-4695-b7b9-467ae3bb4f00] Pending
helpers_test.go:344: "busybox" [40542d8f-fc1e-4695-b7b9-467ae3bb4f00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 00:55:11.671917   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:11.678346   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:11.689734   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:11.711276   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:11.752675   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:11.834103   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:11.995872   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:12.317162   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:12.958680   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:14.240805   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [40542d8f-fc1e-4695-b7b9-467ae3bb4f00] Running
E0914 00:55:16.802639   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:55:18.423731   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005600664s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-057857 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-057857 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-057857 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-617306 -n newest-cni-617306
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-617306 -n newest-cni-617306: exit status 7 (63.805518ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-617306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-617306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-617306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.783813023s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-617306 -n newest-cni-617306
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-617306 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-617306 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-617306 -n newest-cni-617306
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-617306 -n newest-cni-617306: exit status 2 (225.822499ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-617306 -n newest-cni-617306
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-617306 -n newest-cni-617306: exit status 2 (231.751176ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-617306 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-617306 -n newest-cni-617306
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-617306 -n newest-cni-617306
E0914 00:55:59.385082   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-880490 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 00:56:25.781907   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:25.788333   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:25.799802   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:25.821244   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:25.862748   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:25.944165   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:26.105731   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:26.427926   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:27.069340   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:28.351375   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:30.913034   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:33.610496   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:36.034310   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:46.276285   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-880490 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (52.417760349s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-880490 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f3e9a62c-f6d6-4fb8-bb58-13444f20ce95] Pending
helpers_test.go:344: "busybox" [f3e9a62c-f6d6-4fb8-bb58-13444f20ce95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 00:56:58.155872   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.162278   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.173669   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.195092   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.236517   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.318612   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.480202   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:56:58.801916   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f3e9a62c-f6d6-4fb8-bb58-13444f20ce95] Running
E0914 00:56:59.443941   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:00.725728   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:57:03.287281   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003975218s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-880490 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-880490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-880490 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (634.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-754332 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 00:57:39.132790   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-754332 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m33.779418775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754332 -n default-k8s-diff-port-754332
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (634.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (604.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-057857 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 00:57:55.532174   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.408546   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.415015   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.426539   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.448068   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.489650   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.571154   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:09.732802   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:10.054674   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:10.696270   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:11.978143   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:14.539949   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:19.662206   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:20.094490   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:27.787841   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:27.794249   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:27.806488   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:27.827890   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:27.869339   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:27.950867   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:28.112464   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:28.434439   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:28.804967   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:29.076668   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:29.903631   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:30.358808   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:32.920754   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:38.042552   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:48.284339   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:58:50.385351   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-057857 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m4.712822452s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-057857 -n no-preload-057857
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (604.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-431084 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-431084 --alsologtostderr -v=3: (2.369836553s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431084 -n old-k8s-version-431084: exit status 7 (63.736543ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-431084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (495.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-880490 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 00:59:37.446940   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:42.016711   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:49.727018   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:50.726916   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:00:05.148774   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:00:11.671663   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:00:39.374181   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:00:53.269312   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:01:11.649345   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:01:25.781312   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:01:53.485340   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:01:58.156199   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:06.865997   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:20.623922   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:25.858413   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:34.568979   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:03:09.408599   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:03:27.787413   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:03:37.111124   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:03:43.695420   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:03:55.490736   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/bridge-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:04:31.535216   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/addons-473197/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:04:37.446995   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/auto-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:05:11.671774   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/kindnet-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:06:25.781944   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/calico-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:06:58.156015   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/custom-flannel-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:07:06.865615   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/enable-default-cni-670449/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:07:20.624498   12602 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-5422/.minikube/profiles/functional-383860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-880490 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m15.713468276s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (495.97s)
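
For reference, the SecondStart flow above can be reproduced by hand with the same commands the test invokes; the profile name embed-certs-880490 and every flag are taken verbatim from the log lines above (start_stop_delete_test.go:256 and :262), so this is only a sketch of the test's steps, not an additional check performed in this run.

  # Restart the embed-certs profile exactly as start_stop_delete_test.go:256 does.
  out/minikube-linux-amd64 start -p embed-certs-880490 --memory=2200 \
    --alsologtostderr --wait=true --embed-certs --driver=kvm2 \
    --container-runtime=crio --kubernetes-version=v1.31.1

  # Check the reported host state afterwards, as start_stop_delete_test.go:262 does.
  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-880490 -n embed-certs-880490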

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.84
263 TestNetworkPlugins/group/cilium 3.12
278 TestStartStop/group/disable-driver-mounts 0.14
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-670449 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-670449" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-670449

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-670449"

                                                
                                                
----------------------- debugLogs end: kubenet-670449 [took: 2.692868689s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-670449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-670449
--- SKIP: TestNetworkPlugins/group/kubenet (2.84s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-670449 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-670449" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-670449

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: docker daemon config:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: docker system info:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: cri-docker daemon status:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: cri-docker daemon config:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: cri-dockerd version:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: containerd daemon status:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: containerd daemon config:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: containerd config dump:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: crio daemon status:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: crio daemon config:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: /etc/crio:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

>>> host: crio config:
* Profile "cilium-670449" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670449"

----------------------- debugLogs end: cilium-670449 [took: 2.981632741s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-670449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-670449
--- SKIP: TestNetworkPlugins/group/cilium (3.12s)

x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-817727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-817727
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
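The skip above is driver-gated: the test declares itself virtualbox-only and bails out on any other driver. As a rough illustration only (not the actual start_stop_delete_test.go code; the TEST_DRIVER variable, the testDriver helper, and the kvm2 default are assumptions for this sketch), a driver-gated skip in a Go test file could look like this:

// driver_skip_test.go -- hypothetical sketch, not minikube's real test code.
// Demonstrates a test that skips itself unless the driver under test is
// "virtualbox", mirroring the SKIP message above.
package example

import (
	"os"
	"testing"
)

// testDriver reads the driver under test from an environment variable;
// both the variable name and the default value are assumptions.
func testDriver() string {
	if d := os.Getenv("TEST_DRIVER"); d != "" {
		return d
	}
	return "kvm2"
}

func TestDisableDriverMounts(t *testing.T) {
	if testDriver() != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... the real test body would exercise --disable-driver-mounts here ...
}

Running such a test under a kvm2 driver would report it as SKIP, which matches how the report above records the result.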